
    Flexible Support of Healthcare Processes

    Traditionally, healthcare information systems have focused on the support of predictable and repetitive clinical processes. Even though the latter can often be prespecified in formal process models, process flexibility in terms of dynamic adaptability is indispensable to cope with exceptions and unforeseen situations. Flexibility is further required to accommodate the need for evolving healthcare processes and to properly support healthcare process variability. In addition, process-aware information systems are increasingly used to support less structured healthcare processes (i.e., patient treatment processes), which can be characterized as knowledge-intensive. Healthcare processes of this category are neither fully predictable nor repetitive and, therefore, cannot be fully prespecified at design time. The partial unpredictability of these processes, in turn, demands a certain amount of looseness. This chapter deals with the characteristic flexibility needs of both prespecified and loosely specified healthcare processes. In addition, it presents fundamental flexibility features required to address these flexibility needs as well as to accommodate them in healthcare practice.
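    The dynamic adaptability the abstract refers to can be illustrated with a toy sketch (all class, task, and method names below are invented for illustration, not taken from the chapter): a running process instance deviates from its prespecified model by inserting an unplanned task in response to an exception.

```python
# Minimal sketch of dynamic process adaptation: a prespecified treatment
# process is modelled as a task sequence; an ad-hoc change inserts an
# unforeseen task into one running instance without touching the model
# used by other instances.

class ProcessInstance:
    def __init__(self, model):
        self.model = list(model)   # remaining prespecified task sequence
        self.done = []             # executed tasks

    def execute_next(self):
        task = self.model.pop(0)
        self.done.append(task)
        return task

    def insert_task(self, task, before):
        """Ad-hoc change: insert an unplanned task before a given one."""
        self.model.insert(self.model.index(before), task)

treatment = ProcessInstance(["admit", "examine", "medicate", "discharge"])
treatment.execute_next()                              # executes "admit"
treatment.insert_task("lab test", before="medicate")  # react to an exception
print(treatment.model)  # ['examine', 'lab test', 'medicate', 'discharge']
```

    Loosely specified (knowledge-intensive) processes would go further and defer parts of the model itself to run time; the sketch only shows the simpler ad-hoc deviation case.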

    Safeguarding against new privacy threats in inter-enterprise collaboration environments

    Inter-enterprise collaboration has become essential for the success of enterprises. As competition increasingly takes place between supply chains and networks of enterprises, there is a strategic business need to participate in multiple collaborations simultaneously. Collaborations based on an open market of autonomous actors set special requirements for computing facilities supporting the setup and management of these business networks of enterprises. Currently, the safeguards against privacy threats in collaborations crossing organizational borders are both insufficient and incompatible with the open market. A broader understanding is needed of the architecture of defense structures, and privacy threats must be detected not only on the level of a private person or enterprise, but on the community and ecosystem levels as well. Control measures must be automated wherever possible in order to keep the cost and effort of collaboration management reasonable. This article contributes to the understanding of the modern inter-enterprise collaboration environment and privacy threats in it, and presents the automated control measures required to ensure that actors in inter-enterprise collaborations behave correctly to preserve privacy. Peer reviewed.

    A first approach to the automatic generation of service graphs for building trust

    In recent years, web services have turned out to be an emerging feature that is transforming how the web is conceptualized. Rather than considering the Web as a huge collection of static pages, for many purposes the WWW can be better understood as a collection of entities that provide and use services. In this setting, graph-based representations for modelling trustworthiness in a provider-consumer framework for agents have proven to be an attractive approach. However, computing and maintaining the underlying graph may be a considerably complex task. This paper presents a first approach towards computing such service graphs automatically on the basis of so-called concept lattices. Our proposal is intended to enhance existing service-graph representations for modelling trust. Track: VI Workshop de Agentes y Sistemas Inteligentes (WASI). Red de Universidades con Carreras en Informática (RedUNCI).
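    As a rough illustration of the concept-lattice idea (the incidence data and code below are invented for illustration and are not the paper's implementation), a formal concept pairs a set of providers with exactly the services they all offer; enumerating the closed pairs over a small provider-service table yields the nodes of such a lattice:

```python
# Illustrative Formal Concept Analysis sketch: derive the formal concepts
# of a provider x service incidence table. Each concept is a maximal
# (providers, services) pair closed under the two derivation maps.
from itertools import combinations

providers = {
    "P1": {"search", "payment"},
    "P2": {"search", "maps"},
    "P3": {"search", "payment", "maps"},
}
services = set().union(*providers.values())

def extent(attrs):   # providers offering every service in attrs
    return {p for p, s in providers.items() if attrs <= s}

def intent(objs):    # services offered by every provider in objs
    return set.intersection(*(providers[p] for p in objs)) if objs else services

# Enumerate closures of all service subsets; duplicates collapse via the set.
concepts = set()
for r in range(len(services) + 1):
    for attrs in combinations(sorted(services), r):
        objs = extent(set(attrs))
        concepts.add((frozenset(objs), frozenset(intent(objs))))

for objs, attrs in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(objs), "->", sorted(attrs))
```

    Ordering these concepts by extent inclusion gives the lattice structure over which a service graph could then be derived.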

    Quality-aware architectural model transformations in adaptive mashups user interfaces

    The final publication is available at IOS Press through http://dx.doi.org/10.3233/FI-2016-0000. Mashup user interfaces provide their functionality through the combination of different services. The integration of such services can be solved by using reusable and third-party components. Furthermore, these interfaces must be adapted to user preferences, context changes, user interactions and component availability. Model transformation is a useful mechanism to address this adaptation, but normally these operations only focus on the functional requirements. In this sense, quality attributes should be included in the adaptation process to obtain the best adapted mashup user interface. This paper proposes a generic quality-aware transformation process to support the adaptation of software architectures. The transformation process has been applied in ENIA, a geographic information system, by constructing a specific quality model for the adaptation of mashup user interfaces. This model is taken into account for evaluating the different transformation alternatives and choosing the one that maximizes the quality assessments. The approach has been validated by a set of adaptation scenarios that are intended to maximize different quality factors and therefore apply distinct combinations of metrics. Peer reviewed. Postprint (author's final draft).
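    The selection step described above can be sketched in a few lines (the quality factors, weights, and alternative names below are invented, not taken from the ENIA quality model): each transformation alternative is scored against a weighted quality model and the highest-scoring one is applied.

```python
# Hedged sketch of quality-aware alternative selection: aggregate each
# alternative's metric values with the quality model's weights and pick
# the alternative maximizing the assessment.

quality_model = {"usability": 0.5, "performance": 0.3, "reusability": 0.2}

alternatives = {
    "swap-component": {"usability": 0.9, "performance": 0.6, "reusability": 0.4},
    "resize-layout":  {"usability": 0.7, "performance": 0.8, "reusability": 0.5},
    "reload-all":     {"usability": 0.4, "performance": 0.9, "reusability": 0.9},
}

def score(metrics):
    # Weighted sum over the quality model's factors.
    return sum(quality_model[q] * metrics[q] for q in quality_model)

best = max(alternatives, key=lambda a: score(alternatives[a]))
print(best, round(score(alternatives[best]), 2))  # swap-component 0.71
```

    Different adaptation scenarios would simply swap in a different weight vector, which is what lets distinct scenarios maximize distinct quality factors.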

    Peer Data Management

    Peer Data Management (PDM) deals with the management of structured data in unstructured peer-to-peer (P2P) networks. Each peer can store data locally and define relationships between its data and the data provided by other peers. Queries posed to any of the peers are then answered by also considering the information implied by those mappings. The overall goal of PDM is to provide semantically well-founded integration and exchange of heterogeneous and distributed data sources. Unlike traditional data integration systems, peer data management systems (PDMSs) thereby allow for full autonomy of each member and need no central coordinator. The promise of such systems is to provide flexible data integration and exchange at low setup and maintenance costs. However, building such systems raises many challenges. Besides the obvious scalability problem, choosing an appropriate semantics that can deal with arbitrary, even cyclic topologies, data inconsistencies, or updates while at the same time allowing for tractable reasoning has been an area of active research in the last decade. In this survey we provide an overview of the different approaches suggested in the literature to tackle these problems, focusing on appropriate semantics for query answering and data exchange rather than on implementation-specific problems.
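    The query-answering behaviour sketched in the abstract can be illustrated as follows (the peers, tuples, and traversal are invented toy material, not any particular PDMS semantics): a peer answers a query from its local data plus the answers implied by following mappings to neighbouring peers, with a visited set so that cyclic topologies still terminate.

```python
# Simplified sketch of PDM-style query answering over a cyclic peer network.
# Mappings are reduced to "forward the query to this neighbour" for brevity;
# real PDMSs rewrite queries through schema mappings instead.

peers = {
    "A": {"data": {("flu", "oseltamivir")}, "mappings": ["B"]},
    "B": {"data": {("flu", "rest")},        "mappings": ["C"]},
    "C": {"data": {("cold", "rest")},       "mappings": ["A"]},  # cycle A->B->C->A
}

def answer(peer, predicate, visited=None):
    visited = visited if visited is not None else set()
    if peer in visited:            # cycle detection keeps evaluation finite
        return set()
    visited.add(peer)
    result = {t for t in peers[peer]["data"] if t[0] == predicate}
    for neighbour in peers[peer]["mappings"]:
        result |= answer(neighbour, predicate, visited)
    return result

print(sorted(answer("A", "flu")))  # local answers plus those implied via B
```

    The hard part the survey focuses on, choosing a semantics that stays tractable under cycles, inconsistencies, and updates, is exactly what this naive traversal glosses over.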

    Measuring Expert Performance at Manually Classifying Domain Entities under Upper Ontology Classes

    Classifying entities in domain ontologies under upper ontology classes is a recommended task in ontology engineering to facilitate semantic interoperability and modelling consistency. Integrating upper ontologies this way is difficult and, despite emerging automated methods, remains a largely manual task. Little is known about how well experts perform at upper ontology integration. To develop methodological and tool support, we first need to understand how well experts do this task. We designed a study to measure the performance of human experts at manually classifying classes in a general knowledge domain ontology with entities in the Basic Formal Ontology (BFO), an upper ontology used widely in the biomedical domain. We conclude that manually classifying domain entities under upper ontology classes is indeed very difficult to do correctly. Given the importance of the task and the high degree of inconsistent classifications we encountered, we further conclude that it is necessary to improve the methodological framework surrounding the manual integration of domain and upper ontologies.
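    One common way to quantify the inconsistent classifications such a study encounters is an inter-rater agreement statistic; the sketch below uses Cohen's kappa on invented ratings (the paper's actual metric and data are not given here, and the BFO-style labels are illustrative only).

```python
# Cohen's kappa between two experts assigning upper-ontology labels to the
# same six domain entities: observed agreement corrected for the agreement
# expected by chance from each rater's label distribution.
from collections import Counter

expert1 = ["Object", "Process", "Quality", "Object", "Process", "Object"]
expert2 = ["Object", "Process", "Object",  "Object", "Quality", "Object"]

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

print(round(cohens_kappa(expert1, expert2), 3))
```

    With the toy ratings above this yields kappa = 3/7, roughly 0.43, i.e. only moderate agreement despite 4 of 6 matching labels, which is how chance correction exposes the difficulty the study reports.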