
    Optimum Allocation of Distributed Service Workflows with Probabilistic Real-Time Guarantees

    This paper addresses the problem of optimally allocating distributed real-time workflows with probabilistic service guarantees over a set of physical resources. The discussion focuses on how such a problem may be mathematically formalized, in terms of both the constraints and the objective function to be optimized, which also accounts for possible business rules regulating the deployment of the workflows. The resulting formal problem constitutes a probabilistic admission-control test that a provider may run to decide whether it is worth admitting new workflows into the system and, if so, what the optimum allocation of the workflows to the available resources is. Various options are presented that may be plugged into the formal problem description, depending on the specific needs of individual workflows. The problem has been implemented in GAMS and tested with various solvers. An illustrative numerical example and an analysis of the results of the implemented model under realistic settings are presented.
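As a rough illustration of the admission-control idea (not the paper's GAMS formulation; the workflow loads, host names and capacities below are invented), an exhaustive search over allocations might look like:

```python
from itertools import product

def admit(workflows, capacities):
    """Toy admission-control test: enumerate every assignment of
    workflows to resources, discard those violating capacity
    constraints, and keep the assignment with the lowest peak
    utilisation.  Returns (admitted, best_assignment)."""
    names = list(workflows)
    resources = list(capacities)
    best, best_peak = None, None
    for choice in product(resources, repeat=len(names)):
        used = dict.fromkeys(resources, 0.0)
        for name, res in zip(names, choice):
            used[res] += workflows[name]
        if any(used[r] > capacities[r] for r in resources):
            continue  # constraint violated: resource over capacity
        peak = max(used[r] / capacities[r] for r in resources)
        if best_peak is None or peak < best_peak:
            best, best_peak = dict(zip(names, choice)), peak
    return best is not None, best

# hypothetical request: three workflow loads, two unit-capacity hosts
ok, alloc = admit({"wf1": 0.6, "wf2": 0.5, "wf3": 0.4},
                  {"host1": 1.0, "host2": 1.0})
```

A real admission test would of course use a solver rather than enumeration, which is exponential in the number of workflows.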

    Modelling and quantification of structural uncertainties in petroleum reservoirs assisted by a hybrid Cartesian cut cell/enriched multipoint flux approximation approach

    Efficient and profitable oil production depends on making reliable predictions about reservoir performance. However, restricted knowledge of the reservoir's distributed properties and structure calls for History Matching, in which the reservoir model is calibrated to reproduce the field's observed history. This inverse problem yields multiple history-matched models that may give different predictions of reservoir performance. Uncertainty Quantification bounds the resulting model uncertainties and increases the reliability of forecasts of future reservoir behaviour. Conventional Uncertainty Quantification approaches ignore large-scale uncertainties related to reservoir structure, even though structural uncertainties can influence reservoir forecasts more strongly than petrophysical uncertainty. What makes the quantification of structural uncertainty impracticable is the need for global regridding at each step of the History Matching process. To remove this obstacle, we develop an efficient methodology based on the Cartesian Cut Cell Method, which decouples the model from its representation on the grid and allows uncertain structures to be varied as part of the History Matching process. The reduced numerical accuracy caused by cell degeneracies in the vicinity of geological structures is compensated by an enhanced scheme of the class of Locally Conservative Flux Continuous Methods (the Extended Enriched Multipoint Flux Approximation Method, abbreviated extended EMPFA). The robustness and consistency of the proposed hybrid Cartesian Cut Cell/extended EMPFA approach are demonstrated in terms of a true representation of the influence of geological structures on flow behaviour. In this research, the general framework of Uncertainty Quantification is extended and equipped with the proposed approach to tackle uncertainties in different structures such as reservoir horizons, bedding layers, faults and pinchouts. Significant improvements in the quality of reservoir recovery forecasts and reservoir volume estimation are presented for synthetic models with uncertain structures. The thesis also provides a comparative study of the influence of structural uncertainty on reservoir forecasts across various geological structures.
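The history-matching loop described above can be caricatured in a few lines. The exponential decline-curve forward model, tolerance and parameter grid below are invented for illustration; the point is only that several parameter values typically reproduce the observed history, hence multiple history-matched models:

```python
import math

def forward(k, times):
    """Toy forward model: exponential production decline with rate k."""
    return [100.0 * math.exp(-k * t) for t in times]

def history_match(observed, times, candidates, tol=5.0):
    """Accept every candidate parameter whose simulated history lies
    within `tol` (sum of squared misfits) of the observations.
    History matching is ill-posed, so several models usually pass."""
    matched = []
    for k in candidates:
        sim = forward(k, times)
        misfit = sum((s - o) ** 2 for s, o in zip(sim, observed))
        if misfit <= tol:
            matched.append(k)
    return matched

times = [0, 1, 2, 3]
observed = forward(0.30, times)          # synthetic "field history"
candidates = [i / 100 for i in range(10, 51)]
matched = history_match(observed, times, candidates)
```

Each accepted model would then give its own forecast, and the spread of those forecasts is what Uncertainty Quantification characterizes.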

    CSP for Executable Scientific Workflows


    Generic business process modelling framework for quantitative evaluation

    Business processes are the backbone of organisations, used to automate and increase the efficiency and effectiveness of their services and products. The rapid growth of the Internet and other Web-based technologies has sparked competition between organisations attempting to provide a faster, cheaper and smarter environment for customers. In response, organisations are examining how their business processes may be evaluated so as to improve business performance. This thesis proposes a generic framework that extends the applicability of various quantitative evaluation techniques to a large class of business processes. The framework introduces a novel engineering methodology that defines a modelling formalism in which business processes can be represented and solved for a set of performance and optimisation algorithms. The methodology allows the various types of algorithms used in model-based business process improvement and optimisation to be plugged into a single modelling formalism. As part of the framework, a generic modelling formalism (MWF-wR) is developed to represent business processes, allowing quantitative evaluation and the selection of parameters for the associated performance evaluation and optimisation. The framework is designed and implemented as software support tools written in Java, an object-oriented programming language, combining three main modules: (i) a business process specification module to define the components of the business process model, (ii) a stochastic Petri net module to map the business process model to a stochastic Petri net, and (iii) an algorithms module to solve the models for various performance optimisation objectives. Furthermore, a literature survey of different aspects of business processes, including modelling and analysis techniques, provides an overview of the current state of research and highlights gaps in business process modelling and performance analysis. Finally, experiments are presented to investigate the validity of the proposed approach.
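To give a flavour of module (ii), the mean cycle time of a purely sequential stochastic Petri net with exponentially distributed firing delays can be estimated by Monte Carlo simulation. The three-step process and its rates below are hypothetical, not taken from the thesis:

```python
import random

def simulate_spn(transitions, n_runs=10000, seed=1):
    """Monte-Carlo estimate of the mean cycle time of a sequential
    stochastic Petri net: each transition fires after an
    exponentially distributed delay with the given rate."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        t = 0.0
        for rate in transitions.values():
            t += rng.expovariate(rate)  # mean delay is 1/rate
        total += t
    return total / n_runs

# hypothetical three-step process: receive order -> check stock -> ship
mean_time = simulate_spn({"receive": 2.0, "check": 1.0, "ship": 0.5})
```

For this sequential net the analytic mean is simply the sum of the mean delays, 0.5 + 1.0 + 2.0 = 3.5; nets with choices and concurrency are where a Petri-net solver earns its keep.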

    Computing the Parallelism Degree of Timed BPMN Processes

    A business process is a combination of structured and related activities that aim at fulfilling a specific organizational goal for a customer or market. An important measure when developing a business process is its degree of parallelism, namely the maximum number of tasks that are executable in parallel at any given time in the process. This measure determines the peak demand on tasks and thus can provide valuable insight into the problem of resource allocation in business processes. This paper considers timed business processes modeled in BPMN, a workflow-based graphical notation for processes in which execution times can be associated with several BPMN constructs, such as tasks and flows. An encoding of timed business processes into Maude's rewriting logic system is presented, enabling the automatic computation of timed degrees of parallelism for business processes. The approach is illustrated with a simple yet realistic case study in which the degree of parallelism is used to improve a business process design with the ultimate goal of optimizing resources and, therefore, the potential to reduce operating costs.
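Independently of the Maude encoding used in the paper, once every task has a known start and end time the degree of parallelism reduces to a maximum-overlap computation over intervals, sketched here with invented task intervals:

```python
def parallelism_degree(intervals):
    """Maximum number of tasks active at the same instant, computed
    by sweeping over start/end events; at equal timestamps, ends are
    processed before starts so a finishing task is not counted as
    overlapping with one that starts at the same moment."""
    events = []
    for start, end in intervals:
        events.append((start, 1))
        events.append((end, -1))
    events.sort(key=lambda e: (e[0], e[1]))  # -1 sorts before +1 at ties
    current = peak = 0
    for _, delta in events:
        current += delta
        peak = max(peak, current)
    return peak

# tasks annotated with execution intervals, as in a timed BPMN model
degree = parallelism_degree([(0, 4), (1, 3), (2, 6), (5, 7)])
```

Here the peak demand occurs while the first three tasks overlap, so the degree is 3, i.e. at most three resources are ever needed simultaneously.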

    Toward a decision support system for the clinical pathways assessment

    This paper presents a decision support system, based on clinical pathways, to be used in hospital management tasks. We propose a very simple graphical modeling language, based on a small number of primitive elements, through which medical doctors can describe the clinical pathway for a specific disease. Three essential aspects of a clinical pathway can be specified in this language: (1) patient flow; (2) resource utilization; and (3) information interchange. This high-level language is a domain-specific modeling language called Healthcare System Specification (HSS), defined as a Unified Modeling Language (UML) profile. A model-to-model transformation is also proposed in order to obtain, from the HSS specification of a pathway, a Stochastic Well-formed Net (SWN) model that enables a formal analysis of the modeled system and, if needed, the application of synthesis methods enforcing specified requirements. The transformation is based on the application of local rules. The clinical pathway for hip fracture at the "Lozano Blesa" University Hospital in Zaragoza is taken as an example.
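The rule-based model-to-model transformation can be gestured at as follows. The element kinds, rule bodies and naming scheme here are invented for illustration, not the actual HSS-to-SWN rules; the structural idea is just that each source element maps locally to a small net fragment and the target model is the union of the fragments:

```python
# hypothetical local rules: each pathway element maps to a small
# Petri-net fragment (places and transitions)
RULES = {
    "activity": lambda n: {"places": [f"p_{n}_wait", f"p_{n}_done"],
                           "transitions": [f"t_{n}_exec"]},
    "resource": lambda n: {"places": [f"p_{n}_idle"],
                           "transitions": []},
}

def transform(pathway):
    """Apply the local rule for each element and union the fragments."""
    net = {"places": [], "transitions": []}
    for kind, name in pathway:
        fragment = RULES[kind](name)
        net["places"] += fragment["places"]
        net["transitions"] += fragment["transitions"]
    return net

net = transform([("activity", "triage"), ("activity", "surgery"),
                 ("resource", "bed")])
```

A real transformation would also wire the fragments together (arcs between places and transitions); this sketch only shows the per-element rule application.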

    Analysis of single-cell RNA-Seq reveals dynamic changes during macrophage state transition in atherosclerosis mouse model

    Background: Atherosclerosis is an arterial inflammation that causes ischemic heart disease, the leading cause of death worldwide. Macrophages play major roles during disease development through both pro-inflammatory and anti-inflammatory functions. The lack of effective treatment is mainly due to an incomplete understanding of the molecular mechanisms underlying disease progression and regression. Materials and methods: The transcriptomes of macrophages from two aortic samples taken from the atherosclerotic region during disease progression and regression were analyzed using a previously published dataset (GEO accession GSE123587). Pre-processing, clustering of cells and identification of unique markers for each cluster were performed using the Seurat package, implemented in the R programming language. The Monocle package was used to order the cells in pseudotime and to detect the key molecules that changed dramatically between distinct macrophage states (pro-inflammatory and anti-inflammatory). Ingenuity Pathway Analysis (IPA) software was used to analyze pathway activity across macrophage states along the trajectory and to retrieve the transcriptional regulatory network among the genes determining the final states. Prediction of the miRNAs that might be involved in disease progression was performed using TargetScan and GSEA (Gene Set Enrichment Analysis). The Cytoscape application was used to visualize the regulatory network between the differentially regulated genes across macrophage states. Results: Clustering analysis revealed that the macrophages occupy 11 distinct states. Two states were dominant in the progression-group macrophages, and one state was dominant in the regression-group macrophages. Moreover, trajectory analysis showed a bifurcation point near the end of the trajectory, where macrophage fates were destined to be either pro-inflammatory or anti-inflammatory. Macrophages unique to the disease-progression branch were found to activate the STAT cascade, induce an acute inflammatory response and upregulate inflammatory cytokines, denoting M1 polarization. In contrast, regression-branch-specific macrophages were found to activate cholesterol efflux pathways and upregulate anti-inflammatory cytokines such as TSLP and CCL24. The transcriptional regulatory network between differentially regulated genes in the two branches revealed changes in the transcriptional dynamics acquired during the macrophage state transition. STAT1 (signal transducer and activator of transcription 1) and IRF7 (interferon regulatory factor 7) were found to be upregulated in the progression branch, maintaining an inflammatory module that results in the production of distinct inflammatory cytokines. On the other hand, MAFB (MAF bZIP transcription factor B) and IGF1 (insulin-like growth factor 1) were found to be upregulated in the regression branch, interrupting the inflammatory module at different levels. In addition, 10 miRNAs, such as miR-344, miR-346 and miR-485, were predicted to be upregulated in progression-branch-specific macrophages. Conclusion: Inflammatory sites in atherosclerotic lesions contain both pro-inflammatory and anti-inflammatory macrophages. Each subset of macrophages activates a unique transcriptional program. Certain transcription factors and growth factors have the potential to alter the whole transcriptional regulatory network, thereby shifting macrophages from the inflammatory to the anti-inflammatory state. Understanding how this state transition occurs will be a key step toward better understanding and treating atherosclerosis.
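As a minimal stand-in for the per-cluster marker identification step (not Seurat's actual statistical tests), one can simply rank genes by the difference between their mean expression inside a cluster and outside it. The expression values below are fabricated, though Nos2 and Mrc1 are classic pro-inflammatory (M1) and anti-inflammatory (M2) macrophage markers:

```python
def find_markers(expression, clusters):
    """For each cluster, pick the gene with the largest difference
    between mean expression inside the cluster and in all other
    cells -- a crude stand-in for Seurat's marker tests."""
    cells = list(expression)
    genes = list(expression[cells[0]])
    markers = {}
    for c in sorted(set(clusters.values())):
        inside = [cell for cell in cells if clusters[cell] == c]
        outside = [cell for cell in cells if clusters[cell] != c]
        def mean(group, g):
            return sum(expression[cell][g] for cell in group) / len(group)
        markers[c] = max(genes, key=lambda g: mean(inside, g) - mean(outside, g))
    return markers

# fabricated toy expression matrix (cells x genes)
expr = {
    "cell1": {"Nos2": 9.0, "Mrc1": 0.5},
    "cell2": {"Nos2": 8.0, "Mrc1": 1.0},
    "cell3": {"Nos2": 0.5, "Mrc1": 7.5},
    "cell4": {"Nos2": 1.0, "Mrc1": 8.0},
}
markers = find_markers(expr, {"cell1": "M1", "cell2": "M1",
                              "cell3": "M2", "cell4": "M2"})
```

Real marker detection additionally filters by statistical significance and expression fraction; this sketch only conveys the inside-versus-outside comparison.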

    CLIP and complementary methods

    RNA molecules start assembling into ribonucleoprotein (RNP) complexes during transcription. Dynamic RNP assembly, largely directed by cis-acting elements on the RNA, coordinates all processes in which the RNA is involved. To identify the sites bound by a specific RNA-binding protein on endogenous RNAs, cross-linking and immunoprecipitation (CLIP) and complementary, proximity-based methods have been developed. In this Primer, we discuss the main variants of these protein-centric methods and the strategies for their optimization and quality assessment, as well as RNA-centric methods that identify the protein partners of a specific RNA. We summarize the main challenges of computational CLIP data analysis, how to handle various sources of background and how to identify functionally relevant binding regions. We outline the various applications of CLIP and available databases for data sharing. We discuss the prospect of integrating data obtained by CLIP with complementary methods to gain a comprehensive view of RNP assembly and remodelling, unravel the spatial and temporal dynamics of RNPs in specific cell types and subcellular compartments, and understand how defects in RNPs can lead to disease. Finally, we present open questions in the field and give directions for further development and applications.
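A toy version of the peak-calling step in CLIP analysis might flag windows whose crosslink-event density exceeds a fold threshold over a uniform background; real callers model background far more carefully (replicates, local rates, multiple-testing control), and the counts and parameters below are invented:

```python
def call_binding_regions(crosslink_counts, background_rate,
                         window=5, fold=4.0):
    """Flag every length-`window` region whose total crosslink count
    is at least `fold` times the expected background count -- a crude
    stand-in for the statistical peak callers used on CLIP data."""
    regions = []
    for start in range(0, len(crosslink_counts) - window + 1):
        observed = sum(crosslink_counts[start:start + window])
        expected = background_rate * window
        if observed >= fold * expected:
            regions.append((start, start + window))
    return regions

# invented per-position crosslink counts along a transcript
counts = [0, 1, 0, 0, 9, 12, 10, 1, 0, 0, 0, 1]
regions = call_binding_regions(counts, background_rate=0.5)
```

Overlapping flagged windows would normally be merged into a single binding region; that step is omitted here for brevity.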

    Network-based methods for biological data integration in precision medicine

    The vast and continuously increasing volume of biomedical data produced during the last decades opens new opportunities for large-scale modelling of disease biology, facilitating a more comprehensive and integrative understanding of its processes. Nevertheless, this type of modelling requires highly efficient computational systems capable of dealing with such data volumes. Computational approaches commonly used in machine learning and data analysis, namely dimensionality reduction and network-based approaches, have been developed with the goal of effectively integrating biomedical data. Among these methods, network-based machine learning stands out due to its major advantage in terms of biomedical interpretability: these methodologies provide a highly intuitive framework for the integration and modelling of biological processes. This PhD thesis explores the potential of integrating complementary biomedical knowledge with patient-specific data to provide novel computational approaches for biomedical scenarios characterized by data scarcity. The primary focus is on studying how high-order graph analysis (i.e., community detection in multiplex and multilayer networks) may help elucidate the interplay of different types of data in contexts where statistical power is heavily limited by small sample sizes, such as rare diseases and precision oncology. The central aim is to illustrate how network biology, among the several data integration approaches with the potential to achieve this task, can play a pivotal role in addressing this challenge, given its advantages in molecular interpretability. Through its insights and methodologies, the thesis shows how network biology, and in particular models based on multilayer networks, helps bring the vision of precision medicine to these complex scenarios, providing a natural approach for the discovery of new biomedical relationships that overcomes the difficulties of studying cohorts with limited sample sizes (data-scarce scenarios). Delving into the potential of current artificial intelligence (AI) and network biology applications to address data granularity issues in precision medicine, the thesis presents pivotal research works, based on multilayer networks, for the analysis of two rare-disease scenarios with specific data granularities, effectively overcoming the classical constraints hindering rare-disease and precision-oncology research. The first research article presents a personalized-medicine study of the molecular determinants of severity in congenital myasthenic syndromes (CMS), a group of rare disorders of the neuromuscular junction (NMJ). The analysis of severity in rare diseases, despite its importance, is typically neglected because of limited data availability. In this study, modelling biomedical knowledge via multilayer networks allowed the functional implications of individual mutations in the cohort under study to be understood, as well as their relationships with the causal mutations of the disease and the different levels of severity observed. Moreover, the study presents experimental evidence of the role of a previously unsuspected gene in NMJ activity, validating the role predicted using the newly introduced methodologies. The second research article focuses on the applicability of multilayer networks to gene prioritization. Extending concepts for the analysis of different data granularities first introduced in the previous article, this work provides a methodology based on the persistence of network community structures across a range of modularity resolutions, effectively providing a new framework for gene prioritization for patient stratification. In summary, this PhD thesis presents major advances in the use of multilayer network-based approaches for applying precision medicine to data-scarce scenarios, exploring the potential of integrating extensive biomedical knowledge with patient-specific data.
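To sketch the multiplex-community idea, the toy below pools edges from several layers into one neighbourhood and runs plain label propagation. This is far simpler than the multilayer-modularity methods such theses build on, and the two-layer data (standing in for, say, protein interactions and co-expression) is invented; the point is only that several evidence layers vote on the same partition:

```python
import random

def multiplex_communities(layers, nodes, iters=50, seed=0):
    """Toy community detection on a multiplex network: pool the edges
    of all layers into one neighbourhood, then run label propagation
    (each node repeatedly adopts its neighbours' majority label,
    ties broken lexicographically for determinism)."""
    rng = random.Random(seed)
    neigh = {n: [] for n in nodes}
    for layer in layers:
        for a, b in layer:
            neigh[a].append(b)
            neigh[b].append(a)
    label = {n: n for n in nodes}          # start: every node its own label
    for _ in range(iters):
        order = list(nodes)
        rng.shuffle(order)
        for n in order:
            if not neigh[n]:
                continue
            counts = {}
            for m in neigh[n]:
                counts[label[m]] = counts.get(label[m], 0) + 1
            label[n] = max(sorted(counts), key=counts.get)
    return label

# two invented layers agreeing on two modules {a,b,c} and {x,y,z}
layers = [[("a", "b"), ("b", "c"), ("x", "y")],
          [("a", "c"), ("y", "z"), ("x", "z")]]
labels = multiplex_communities(layers, ["a", "b", "c", "x", "y", "z"])
```

Pooling layers loses per-layer information; genuine multilayer methods keep node-layer copies and inter-layer couplings precisely to avoid that loss.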

    A Calculus for Orchestration of Web Services

    Service-oriented computing, an emerging paradigm for distributed computing based on the use of services, calls for the development of tools and techniques to build safe and trustworthy systems and to analyse their behaviour. Therefore, many researchers have proposed to use process calculi, a cornerstone of current foundational research on the specification and analysis of concurrent, reactive, and distributed systems. In this paper, we follow this approach and introduce CWS, a process calculus expressly designed for specifying and combining service-oriented applications while modelling their dynamic behaviour. We show that CWS can model all the phases of the life cycle of service-oriented applications, such as publication, discovery, negotiation, orchestration, deployment, reconfiguration and execution. We illustrate the specification style that CWS supports by means of a large case study from the automotive domain and a number of more specific examples drawn from it.
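The flavour of a communication reduction in a service calculus can be sketched as below. This toy invoke/receive matching only gestures at orchestration semantics in the style of calculi like CWS; it is not the calculus's actual reduction relation, and the operation names and values are invented:

```python
def reduce_step(processes):
    """One communication step in a toy service calculus: an
    ('invoke', op, value) in parallel with a ('receive', op, cont)
    synchronizes -- the invoke disappears and the receive continues
    with the value bound in its continuation."""
    for i, p in enumerate(processes):
        if p[0] != "invoke":
            continue
        for j, q in enumerate(processes):
            if q[0] == "receive" and q[1] == p[1]:
                rest = [r for k, r in enumerate(processes) if k not in (i, j)]
                return rest + [q[2](p[2])]   # continuation applied to value
    return processes  # no matching pair: the system is stuck or finished

# hypothetical orchestration: a client invokes "quote", a service waits
system = [("invoke", "quote", 42),
          ("receive", "quote", lambda v: ("done", v))]
system = reduce_step(system)
```

A real calculus would add sessions, correlation, and scoping rules governing which invoke may synchronize with which receive; this sketch shows only the bare handshake.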