29 research outputs found

    Analysis of scientific workflows through the U-RDF-PN paradigm

    Scientific computing has gained increasing interest in recent years in areas related to the life sciences. Scientific workflows are a special type of workflow used in large-scale, computationally complex scenarios such as climate models, biological structures, chemistry, surgery, or disaster simulation. Scientific computing has progressively improved through the introduction of new paradigms and technologies in order to tackle increasingly complex challenges. This line of research focuses on adding semantic aspects to scientific computing, contributing a model checking method based on introducing semantic aspects and annotations both in the models and in the formulas to be verified. Adding semantic aspects makes the specification more flexible and eases the integration of remote, heterogeneous services. As a starting point, the development of the COMBAS toolkit (COmprobador de Modelos BAsado en Semántica, a semantics-based model checker) provides an environment that integrates tools for verifying models that include semantic information and for navigating the structures resulting from the process. Scientific workflow models are described using a class of high-level Petri nets annotated with semantic information, the U-RDF-PN, in which the GIDHE group (Grupo de Integración de Sistemas Distribuidos y Heterogéneos), where the author carries out this line of research, has broad and proven experience. Finally, the proposed approach will be applied to a series of real problems of scientific relevance to demonstrate its usefulness and feasibility.

    Parallel computation of the reachability graph of petri net models with semantic information

    Formal verification plays a crucial role when dealing with the correctness of systems. In a previous work, the authors proposed a class of models, the Unary Resource Description Framework Petri Nets (U-RDF-PN), which integrated Petri nets and (RDF-based) semantic information. The work also proposed a model checking approach for the analysis of system behavioural properties that made use of the net reachability graph. Computing such a graph, especially when dealing with high-level structures such as RDF graphs, is a very expensive task that must be considered. This paper describes the development of a parallel solution for the computation of the reachability graph of U-RDF-PN models. Besides that, the paper presents some experimental results obtained when the tool was deployed on cluster and cloud frameworks. The results not only show the improvement in the total time required for computing the graph, but also the high scalability of the solution, which makes it very useful given the current (and future) availability of cloud infrastructures.
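The core of the abstract's problem is exhaustive exploration of a net's reachable markings. As a minimal illustration, the sketch below builds the reachability graph of a plain place/transition net by breadth-first search; the U-RDF-PN semantic layer and the parallel frontier distribution described in the paper are omitted, so this is only the sequential baseline the paper sets out to speed up. Transition and place names are illustrative.

```python
from collections import deque

def enabled(marking, consume):
    # A transition is enabled if every input place holds enough tokens.
    return all(marking.get(p, 0) >= n for p, n in consume.items())

def fire(marking, consume, produce):
    # Remove consumed tokens, add produced ones; drop empty places.
    m = dict(marking)
    for p, n in consume.items():
        m[p] -= n
    for p, n in produce.items():
        m[p] = m.get(p, 0) + n
    return {p: n for p, n in m.items() if n > 0}

def reachability_graph(initial, transitions):
    """Breadth-first exploration of all reachable markings.

    transitions: {name: (consume, produce)} with token counts per place.
    Returns the set of reachable markings (as frozensets of items)
    and the labelled edges between them.
    """
    key = lambda m: frozenset(m.items())
    seen = {key(initial)}
    edges = []
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        for name, (consume, produce) in transitions.items():
            if enabled(m, consume):
                m2 = fire(m, consume, produce)
                edges.append((key(m), name, key(m2)))
                if key(m2) not in seen:
                    seen.add(key(m2))
                    queue.append(m2)
    return seen, edges
```

For example, a net with two tokens in place `p` and a single transition moving one token from `p` to `q` yields three reachable markings. The frontier (the `queue`) is the natural unit to partition among workers in a parallel version.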

    CIBERER: Spanish national network for research on rare diseases: A highly productive collaborative initiative

    Other funding: Instituto de Salud Carlos III (ISCIII); Ministerio de Ciencia e Innovación. CIBER (Center for Biomedical Network Research; Centro de Investigación Biomédica En Red) is a public national consortium created in 2006 under the umbrella of the Spanish National Institute of Health Carlos III (ISCIII). This innovative research structure comprises 11 different specific areas dedicated to the main public health priorities in the National Health System. CIBERER, the thematic area of CIBER focused on rare diseases (RDs), currently consists of 75 research groups belonging to universities, research centers, and hospitals across the entire country. CIBERER's mission is to be a center prioritizing and favoring collaboration and cooperation between biomedical and clinical research groups, with special emphasis on the genetic, molecular, biochemical, and cellular research of RDs. This research is the basis for providing new tools for the diagnosis and therapy of low-prevalence diseases, in line with the International Rare Diseases Research Consortium (IRDiRC) objectives, thus favoring translational research between the scientific environment of the laboratory and the clinical setting of health centers. In this article, we intend to review CIBERER's 15-year journey and summarize the main results obtained in terms of internationalization, scientific production, contributions toward the discovery of new therapies and novel genes associated with diseases, cooperation with patients' associations, and many other topics related to RD research.

    Nurses' perceptions of aids and obstacles to the provision of optimal end of life care in ICU

    Contains fulltext: 172380.pdf (publisher's version) (Open Access)

    Redo log process mining in real life: data challenges & opportunities

    No full text
    Data extraction and preparation are the most time-consuming phases of any process mining project. Due to the variability of the sources of event data, it remains a highly manual process in most cases. Moreover, it is very difficult to obtain reliable event data in enterprise systems that are not process-aware. Some techniques, such as redo log process mining, try to solve these issues by automating the process as much as possible and enabling event extraction in systems that are not process-aware. This paper presents the challenges faced by redo log and traditional process mining, comparing both approaches at the theoretical and practical levels. Finally, we demonstrate that the data obtained with redo log process mining in a real-life environment is at least as valid as the data extracted by the traditional approach.
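The extraction step the abstract describes can be pictured as mapping low-level database change records to process-mining events. The record layout below (timestamp, table, operation, row id) is a simplifying assumption for illustration only, not the actual redo-log format the paper works with.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class RedoRecord:
    # Hypothetical flattened redo-log entry: one row-level change.
    timestamp: datetime
    table: str
    operation: str  # e.g. "INSERT", "UPDATE", "DELETE"
    row_id: str

def records_to_events(records):
    """Turn each redo record into one event, ordered by time.

    The activity name is derived from the SQL operation and the
    affected table; the row id serves as a (naive) case identifier.
    """
    events = []
    for r in sorted(records, key=lambda r: r.timestamp):
        events.append({
            "activity": f"{r.operation} {r.table}",
            "case": r.row_id,
            "time": r.timestamp.isoformat(),
        })
    return events
```

Using the row id as the case notion is the simplest possible choice; in practice, as the later entries in this list discuss, selecting a good case notion is itself a hard problem.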

    Connecting databases with process mining: a meta model and toolset

    Process mining techniques require event logs which, in many cases, are obtained from databases. Obtaining these event logs is not a trivial task and requires substantial domain knowledge. In addition, an extracted event log provides only a single view on the database. To change our view, e.g., to focus on another business process and generate another event log, it is necessary to go back to the data source. This paper proposes a meta model that integrates both the process and data perspectives, relating one to the other. It can be used to generate different views of the database at any moment in a highly flexible way. This approach decouples data extraction from the application of analysis techniques, enabling the application of process mining in different contexts.
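The "many views from one database" idea can be sketched in a few lines: the same stored events, grouped under different case notions, yield different event logs without going back to the source system. The event fields below are illustrative, not the paper's actual meta model.

```python
from collections import defaultdict

# Toy event store: each event carries references to several
# business objects (field names are assumptions for illustration).
EVENTS = [
    {"activity": "create order", "order": "o1", "customer": "c1"},
    {"activity": "pay order",    "order": "o1", "customer": "c1"},
    {"activity": "create order", "order": "o2", "customer": "c1"},
]

def event_log(events, case_notion):
    """Group events into traces keyed by the chosen case notion.

    Choosing "order" vs "customer" produces two different event
    logs from the very same events.
    """
    log = defaultdict(list)
    for e in events:
        log[e[case_notion]].append(e["activity"])
    return dict(log)
```

Grouping by `"order"` gives two short traces; grouping by `"customer"` gives a single trace covering all three events, i.e., a different process view of identical data.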

    Case notion discovery and recommendation: automated event log building on databases

    Process mining techniques use event logs as input. When analyzing complex databases, these event logs can be built in many ways. Events need to be grouped into traces corresponding to a case. Different groupings provide different views on the data. Building event logs is usually a time-consuming, manual task. This paper provides a precise definition of the case notion on databases, which enables the automatic computation of event logs. It also provides a way to assess event log quality, used to rank event logs with respect to their interestingness. The computational cost of building an event log can be avoided by predicting the interestingness of a case notion before the corresponding event log is computed. This makes it possible to give recommendations to users, so they can focus on the analysis of the most promising process views. Finally, the accuracy of the predictions and the quality of the rankings generated by our unsupervised technique are evaluated against existing regression techniques as well as state-of-the-art learning-to-rank algorithms from the information retrieval field. The results show that our prediction technique succeeds at discovering interesting event logs and provides valuable recommendations to users about the perspectives on which to focus their efforts during the analysis.
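The key move in the abstract is scoring candidate case notions cheaply, before the expensive log-building step. The scoring function below is a toy proxy (it rewards notions whose traces are neither all singletons nor one dominant giant trace) and is not the learned interestingness predictor evaluated in the paper; it only illustrates the rank-then-build workflow.

```python
def score_case_notion(case_sizes):
    """Toy interestingness proxy for a candidate case notion.

    case_sizes: number of events per case under that notion.
    Score = fraction of events in multi-event traces, penalized
    when a single case dominates the log.
    """
    total = sum(case_sizes)
    if total == 0:
        return 0.0
    multi = sum(s for s in case_sizes if s > 1)
    dominance = max(case_sizes) / total
    return (multi / total) * (1.0 - dominance)

def rank_case_notions(candidates):
    """candidates: {notion_name: [events per case]} ->
    notion names sorted from most to least promising."""
    return sorted(candidates,
                  key=lambda n: score_case_notion(candidates[n]),
                  reverse=True)
```

A notion producing only singleton traces scores zero (no behaviour to mine), as does one collapsing everything into a single trace; balanced, multi-event groupings are ranked first, and only those top-ranked notions would then have their full event logs materialized.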
