13 research outputs found

    Healthcare information system architectures and standards

    This thesis examines the system architectures of healthcare information systems and the standards of healthcare information technology. The goal is to form an overall picture of the concepts used in the field and the connections between them, of the architectures of healthcare information systems, and to investigate how various standards are followed in them. The sources for the architecture descriptions are vendor-maintained documentation available online and the scientific publications related to it. Three rough architectural models can be distinguished in information system implementations: the federated model, the service-oriented model and the centralized model. In the federated model, data from several different sources is aggregated into a single whole. In the service-oriented model, different systems communicate with one another through a common service interface. In the centralized model, the system forms a single whole, so little integration with other systems is needed. Of these, the service-oriented model is the most modern and, based on this review, well suited to implementing healthcare information systems, since the integration of heterogeneous systems is central to it. Based on the information system solutions in use, standards from four categories are examined more closely. Architecture-related standards include the RM-ODP reference model, the electronic health record architecture standard ISO 18308 and the HISA enterprise architecture. Standards related to patient records include CEN/ISO 13606, openEHR and ISO 20514. Standards developed for messaging include HL7 versions 2 and 3 as well as CDA R2. In connection with these, the HL7 RIM reference information model, which is the foundation of all standards related to HL7 version 3, is also discussed. Of the classification standards, the SNOMED CT terminology and the ICD-10 disease classification are covered.
The examined standards are largely compatible, since their development has taken possible joint use into account and they contain many cross-references. Only HL7 version 2 conflicts with the newer HL7 standards. A drawback of the standards' flexibility turns out to be conflicting interpretations when the standards are implemented. The interoperability problems of healthcare information systems cannot be solved without an overall architectural picture in the development of standards and systems.
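The messaging standards mentioned above differ sharply in notation: HL7 version 2 uses a compact, pipe-delimited segment syntax. As a minimal illustration (the sample message, segment contents and field positions below are invented for demonstration, not taken from the thesis), an HL7 v2 message can be split into segments and fields like this:

```python
# Minimal sketch of parsing an HL7 version 2 message into segments and
# fields. Note: a real parser treats the MSH segment specially, since
# MSH-1 is the field separator character itself; this naive split shifts
# MSH field numbering by one.

def parse_hl7v2(message: str) -> dict:
    """Split an HL7 v2 message into {segment_id: [[field, ...], ...]}."""
    segments = {}
    for line in message.strip().split("\r"):
        fields = line.split("|")
        # Group repeated segments (e.g. multiple OBX) under one key.
        segments.setdefault(fields[0], []).append(fields[1:])
    return segments

# Illustrative two-segment admission message.
sample = "\r".join([
    "MSH|^~\\&|LAB|HOSPITAL|EHR|CLINIC|202401011200||ADT^A01|MSG001|P|2.5",
    "PID|1||12345^^^HOSPITAL||Doe^John||19800101|M",
])

parsed = parse_hl7v2(sample)
patient_name = parsed["PID"][0][4]  # PID-5, the patient name field
```

The terse positional encoding shown here is one reason HL7 v2 conflicts with the model-driven, RIM-based HL7 version 3 standards discussed in the abstract.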

    Toward Shared Understanding: An Argumentation-Based Approach for Communication in Open Multi-Agent Systems

    Open distributed computing applications are becoming increasingly commonplace nowadays. In many cases, these applications are composed of multiple autonomous agents, each with its own aims and objectives. In such complex systems, communication between these agents is usually essential for them to perform their tasks, to coordinate their actions and to share their knowledge. However, successful and meaningful communication can only be achieved by a shared understanding of each other's messages. Therefore, efficient mechanisms are needed to reach a mutual understanding when exchanging expressions from each other's world model and background knowledge. We believe the de facto mechanisms for achieving this are ontologies, and this is the area explored in this thesis. However, supporting shared understanding mechanisms for open distributed applications is a major research challenge. Specifically, one consequence of a system being open is the heterogeneity of the agents. Agents may have conflicting goals, or may be heterogeneous with respect to their beliefs or their knowledge. Forcing all agents to use a common vocabulary defined in one or more shared ontologies is, thus, an oversimplified solution, particularly when these agents are designed and deployed independently of each other. This thesis proposes a novel approach to overcome vocabulary heterogeneity, where the agents dynamically negotiate the meaning of the terms they use to communicate. While many proposals for aligning two agent ontologies have been presented in the literature as the current standard approaches to resolve heterogeneity, they are lacking when dealing with important features of agents and their environment.
Motivated by the hypothesis that ontology alignment approaches should reflect the characteristics of autonomy and rationality that are typical of agents, and should also be tailored to the requirements of an open environment, such as dynamism, we propose a way for agents to define and agree upon the semantics of the terms used at run-time, according to their interests and preferences. Since agents are autonomous and represent different stakeholders, the process by which they come to an agreement will necessarily only come through negotiation. By using argumentation theory, agents generate and exchange different arguments that support or reject possible mappings between vocabularies, according to their own preferences. Thus, this work provides a concrete instantiation of the meaning negotiation process that we would like agents to achieve, and that may lead to shared understanding. Moreover, in contrast to current ontology alignment approaches, the choice of a mapping is based on two clearly identified elements: (i) the argumentation framework, which is common to all agents, and (ii) the preference relations, which are private to each agent. Despite the large body of work in the area of semantic interoperability, we are not aware of any research in this area that has directly addressed these important requirements for open Multi-Agent Systems as we have done in this thesis.
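The two elements named in the abstract, a shared argumentation framework and private preference relations, can be sketched in simplified form. The agent names, argument kinds, weights and acceptance rule below are illustrative assumptions, not the thesis's actual formalism:

```python
# Sketch: two agents evaluate a candidate term mapping by weighing
# exchanged pro/con arguments. The argument kinds (the framework) are
# shared; the preference weights are private to each agent.

from dataclasses import dataclass

@dataclass
class Argument:
    mapping: tuple      # (term_in_ontology_a, term_in_ontology_b)
    supports: bool      # True = argument for the mapping, False = against
    kind: str           # e.g. "terminological", "structural", "extensional"

class Agent:
    def __init__(self, name, preferences):
        self.name = name
        self.preferences = preferences  # private weight per argument kind

    def accepts(self, mapping, arguments):
        """Accept the mapping if pro arguments outweigh con arguments
        under this agent's private preference weights."""
        score = 0.0
        for arg in arguments:
            if arg.mapping != mapping:
                continue
            weight = self.preferences.get(arg.kind, 0.0)
            score += weight if arg.supports else -weight
        return score > 0

# Shared pool of arguments exchanged about one candidate mapping.
pool = [
    Argument(("Patient", "Subject"), True, "terminological"),
    Argument(("Patient", "Subject"), True, "structural"),
    Argument(("Patient", "Subject"), False, "extensional"),
]

a1 = Agent("A1", {"terminological": 0.6, "structural": 0.3, "extensional": 0.4})
a2 = Agent("A2", {"terminological": 0.2, "structural": 0.5, "extensional": 0.3})

mapping = ("Patient", "Subject")
agreed = a1.accepts(mapping, pool) and a2.accepts(mapping, pool)
```

The point of the sketch is the separation the abstract insists on: both agents reason over the same argument pool, but each applies its own private weighting, so agreement is negotiated rather than imposed.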

    Design and implementation of a human-robot collaborative assembly workstation in a modular robotized production line

    Over the last decades, the Industrial Automation domain at factory shop floors has experienced exponential growth in the use of robots, with the objective of increasing efficiency at reasonable cost. However, not all the tasks formerly performed by humans in factories are fully substituted by robots nowadays, especially those requiring a high level of dexterity. In fact, Europe is moving towards implementing efficient workspaces where humans can work safely, aided by robots. In this context, the industrial and research sectors have ambitious plans to achieve solutions that involve coexistence and simultaneity at work between humans and collaborative robots, a.k.a. "cobots" or co-robots, permitting safe interaction within the same or interrelated manufacturing processes. Many cobot producers have started to present their products, but these arrived before the industry had a clear picture of its several needs for this particular technology. This work addresses four questions: how to demonstrate human-robot collaborative manufacturing; how to implement a dual-arm human-robot collaborative workstation; how to integrate a human-robot collaborative workstation into a modular interconnected production line; and what the advantages and challenges of current HRC technologies at the shop floor are. It does so by documenting the formulation of a human-robot collaborative assembly process, implemented by designing and building an assembly workstation that exemplifies a scenario of interaction between a dual-arm cobot and a human operator in order to assemble a product box, as part of a large-scale modular robotized production line. The model produced by this work is part of the research facilities at the Future Automation Systems and Technologies Laboratory at Tampere University.

    Multi-agent based architecture for digital libraries

    Digital Libraries (DL) generally contain a collection of independently maintained data sets, in different formats, which may be queried by geographically dispersed users. The general problem of managing such large digital data archives is particularly challenging when the system must cope with data which is processed on demand. This dissertation proposes a Multi-Agent System (MAS) architecture for the utilisation of an active DL that provides computing services in addition to data-retrieval services, so that users can initiate computing jobs on remote supercomputers for processing, mining, and filtering of the data in the library. The system architecture is based on a collaborative set of agents, where each agent undertakes a pre-defined role, and is responsible for offering a particular type of service. The integration of services is based on a user-defined query which can range in complexity from simple queries to specialised algorithms which are transmitted to image processing archives as mobile agents. The proposed architecture enables new information sources and services to be integrated into the system dynamically, supports autonomous and dynamic on-demand data processing based on collaboration between agents, and is capable of handling a large number of concurrent users. The focus is on the management of mobile agents which roam through the servers that constitute the DL to serve user queries. A new load balancing scheme is proposed for managing agent load among the available servers, based on system state information and predictions about the lifetime of agent tasks and server status. The system architecture is further extended by defining a gateway to provide interoperability with other heterogeneous agent-based systems. Interoperability in this sense enables agents from different types of platforms to communicate between themselves and use services provided by other systems.
The novelty of the proposed gateway approach lies in the ability to adapt an existing legacy system for use with the agent-based approach (and one that adheres to FIPA standards). A prototype has been developed as a proof of concept to outline the principles and ideas involved, with reference to the Synthetic Aperture Radar Atlas (SARA) DL composed of multi-spectral remote-sensing imagery of the Earth. Although the work presented in this dissertation has been evaluated in the context of the SARA DL, the proposed techniques suggest useful guidelines that may be employed by other active archival systems.
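The load-balancing idea described above, choosing a server using system state information and a prediction of the agent task's lifetime, can be sketched as a simple cost-minimizing dispatch. The server names, fields and cost formula below are illustrative assumptions, not the dissertation's actual scheme:

```python
# Hedged sketch: dispatch a mobile agent to the server that minimizes
# predicted completion time, combining the server's current queued work
# with a predicted lifetime for the new agent task.

from dataclasses import dataclass

@dataclass
class Server:
    name: str
    queued_work: float   # seconds of work already queued on this server
    speed: float         # relative processing speed (1.0 = baseline)

def predict_cost(server: Server, task_lifetime: float) -> float:
    """Predicted completion time for the new agent task on this server."""
    return (server.queued_work + task_lifetime) / server.speed

def choose_server(servers, task_lifetime):
    """Pick the server with the lowest predicted completion time."""
    return min(servers, key=lambda s: predict_cost(s, task_lifetime))

# Hypothetical state snapshot of three DL servers.
servers = [
    Server("sara-node-1", queued_work=30.0, speed=1.0),
    Server("sara-node-2", queued_work=10.0, speed=0.5),
    Server("sara-node-3", queued_work=45.0, speed=2.0),
]

best = choose_server(servers, task_lifetime=20.0)
```

Note that the fast but heavily loaded server can still win: the decision depends on the predicted lifetime of the task, not on queue length alone, which is the point of combining state information with lifetime predictions.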

    Smart data management with BIM for Architectural Heritage

    In recent years the topic of smart buildings has received much attention, as have Building Information Modelling (BIM) and interoperability as independent fields. Linking these topics is an essential research target to help designers and stakeholders run processes more efficiently. Working on a smart building requires the use of Information and Communication Technology (ICT) to optimize design, construction and management. In these terms, several technologies such as sensors for remote monitoring and control, building equipment, management software, etc. are available on the market. As BIM provides an enormous amount of information in its database and is theoretically able to work with all kinds of data sources through interoperability, it is essential to define standards for both data contents and exchange formats. In this way, one possibility for aligning research activity with Horizon 2020 is the investigation of energy saving using ICT. Unfortunately, comparing the Architecture, Engineering and Construction (AEC) industry with other sectors makes it clear that advanced information technology applications have not yet been adopted in the building field. However, in recent years the adoption of new methods for data management has been investigated by many researchers. Based on the above considerations, the main purpose of this thesis is to investigate the use of BIM methodology for existing buildings, focusing on three main topics: • smart data management for architectural heritage preservation; • district data management for energy reduction; • the maintenance of high-rises. For these reasons, data management acquires a very important value in relation to the optimization of the building process and is considered the most important goal of this research. Taking into account different kinds of architectural heritage, attention is focused on existing and historical buildings, which are usually characterized by several constraints.
Starting from data collection, a BIM model was developed and customized according to its objectives, providing information for different simulation tests. Finally, data visualization was investigated through Virtual Reality (VR) and Augmented Reality (AR). Certainly, the creation of a 3D parametric model implies that data is organized according to the use of the individual users involved in the building process. This means that each 3D model can be developed with different Levels of Detail/Development (LODs) based on the goal of the data source. Throughout this thesis the importance of LODs is considered in relation to the kind of information filled into a BIM model. In fact, depending on the objectives of each project, a BIM model can be developed in different ways to facilitate querying data for the simulation tests. The three topics were compared considering each step of the building process workflow, highlighting the main differences and evaluating the strengths and weaknesses of BIM methodology. In these terms, the importance of setting up a BIM template before the modelling step was pointed out, because it provides the possibility to manage information so that it can be collected and extracted for different purposes and by specific users. Moreover, based on the results obtained in terms of the 3D parametric model and in terms of process, a proper BIM maturity level was determined for each topic. Finally, the value of interoperability emerged from these tests, considering that it provided the opportunity to develop a framework for collaboration involving all parties of the building industry.
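The LOD-driven querying described above can be illustrated with a toy model in which each attribute of a building element is tagged with the LOD at which it becomes relevant, and a query extracts only the data needed for a given purpose. The attribute names and LOD thresholds below are invented for illustration, not taken from the thesis's template:

```python
# Sketch: filter a BIM element's attributes by the requested Level of
# Development (LOD), so that, e.g., an energy simulation pulls only the
# data defined at or below its working LOD.

element = {
    "id": "wall-017",
    "attributes": {
        # attribute name -> LOD at which the attribute is defined
        "geometry_outline": 200,
        "material_layers": 300,
        "thermal_transmittance": 350,
        "maintenance_schedule": 500,
    },
}

def attributes_for_lod(element, lod):
    """Return attribute names available at or below the requested LOD."""
    return sorted(
        name for name, min_lod in element["attributes"].items()
        if min_lod <= lod
    )

# An energy-analysis view at LOD 350 excludes facility-management data.
energy_view = attributes_for_lod(element, 350)
```

The design choice this sketch reflects is the one argued in the abstract: tagging data by LOD up front (in the BIM template) is what makes purpose-specific extraction possible later.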

    Practice Theory Approach to Wearable Technology. Implications for Sustainability


    Knowledge-based web services for context adaptation.

    The need for higher-value, reliable online services to promote new Internet-based business models is a requirement facing many technologists and business leaders. This need, coupled with the trend towards greater mobility of networked devices and consumers, creates significant challenges for current and future systems developers. The proliferation of mobile devices and the variability of their capabilities present an overwhelming number of options to systems designers and engineers who are tasked with the development of next-generation context-adaptive software services. Given the dynamic nature of this environment, implementing solutions for the current set of devices in the field assumes that this deployment situation is somehow fixed; this assumption does little to support the future and longer-term needs of the marketplace. To add to the complexity, the timeframes necessary to develop robust and adaptive online software services can be long by comparison, so that development projects and their resources are often behind on platform support before the first release is launched to the public. New approaches and methodologies for engineering dynamic and adaptive online services will be necessary and, as will be shown, are in fact mandated by the regulation imposed by service level guarantees. These new techniques and technologies are commercially useless unless they can be used in engineering practice. New context adaptation processes and architectures must be capable of performing under strict service level agreements, the agreements that will undoubtedly govern future business relationships between online parties. This programme of engineering study and research investigates several key issues found in the emerging area of context adaptation services for online mobile networks.
As a series of engineering investigations, the work described here involves a wider array of technical activity than found in traditional doctoral work, and this is reflected throughout the dissertation. First, a clear definition of the industrial motivation is stated to provide the engineering foundation. Next, the programme focuses on the nature of contextual adaptation through product development projects. The development process within these projects raises several issues concerning the commercial feasibility of the technology. From this point, the programme of study progresses through the lifecycle of the engineering process, investigating the critical engineering challenges at each stage. Further analysis of the problems and possible solutions for deploying such adaptive solutions is presented, and experiments are undertaken in the areas of systems component and performance analysis. System-wide architectural options are then evaluated, with specific interest in using knowledge-based systems as one approach to solving some of the issues in context adaptation. Based on the central hypothesis that context parameters are dynamic in nature, the concept of a mobile device knowledge base is presented as a necessary component of an architectural solution and justified through prototyping efforts. The utility of web ontologies and other "soft computing" technologies for the solution is also examined through the review of relevant work and the engineering design of the demonstration system. These technology selections are supported directly by the industrial context and mission. In the final sections, the architecture is evaluated through the demonstration of promising techniques and methods in order to confirm understanding and to evaluate the use of knowledge bases, AI and other technologies within the scope of the project.
Through the implementation of a context adaptation architecture as a business process workflow, the impact of future trends in device reconfiguration is highlighted and discussed. To address the challenge of context adaptation in reconfigurable device architectures, an evolutionary computation approach is then presented as a means to provide an optimal baseline on which a service may execute. These last two techniques are discussed, and new designs are proposed to specifically address the major issues uncovered in the timely collection and evaluation of contextual parameters in a mobile service network. The programme summary and future work then bring together all the key results into a practitioner's reference guide for the creation of online context-adaptive services with a greater degree of intelligence and maintainability while executing within the terms of a service level agreement.
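The device knowledge-base concept at the centre of the hypothesis above can be illustrated in miniature: a service looks up a device's capabilities at request time and derives adaptation parameters from them. The device profiles, attribute names and adaptation rule below are invented for illustration; the dissertation's ontology-backed system is far richer:

```python
# Sketch: a toy device knowledge base driving context adaptation.
# An unknown device falls back to a conservative baseline, mirroring the
# need to serve devices that appear after the service is deployed.

DEVICE_KB = {
    "phone-basic": {"screen_width": 320, "supports_video": False},
    "tablet":      {"screen_width": 1024, "supports_video": True},
}

def adapt_content(device_id, kb=DEVICE_KB):
    """Derive adaptation parameters from the device's KB profile."""
    profile = kb.get(device_id)
    if profile is None:
        # Unknown device: conservative defaults keep the service usable.
        return {"image_width": 320, "include_video": False}
    return {
        # Cap image width at a hypothetical content maximum of 800 px.
        "image_width": min(profile["screen_width"], 800),
        "include_video": profile["supports_video"],
    }

plan = adapt_content("tablet")
```

Keeping the capability data in a queryable knowledge base, rather than hard-coding it into the service, is what lets new device profiles be added without redeploying the adaptation logic, which is the maintainability argument the abstract makes.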

    Wide-Area Situation Awareness based on a Secure Interconnection between Cyber-Physical Control Systems

    Modern critical infrastructures (CIs) are vast, highly complex systems that require the use of information technology to manage, control and monitor their operation. Because of their essential functions, the protection and security of critical infrastructures, and hence of their control systems, has become a priority task for governmental and academic institutions worldwide. The interoperability of CIs, and especially of their control systems (CSs), becomes a key feature for these systems to be able to coordinate and perform control and security tasks cooperatively. The aim of this thesis is therefore to provide tools for the secure interoperability of the different CSs, especially cyber-physical control systems (CPCSs), strengthening intercommunication and coordination among them to create an environment in which the various infrastructures can perform cooperative control and security tasks, building a secure interoperability platform capable of serving multiple CIs within a wide-area situational awareness setting. To this end, we first review the most sophisticated threats to the operation of critical systems, focusing in particular on stealth cyberattacks that threaten the control systems of critical infrastructures such as the smart grid. We direct our research towards the analysis and understanding of this new type of attack against critical systems, and towards possible countermeasures and tools to mitigate the effects of these attacks.
Subsequently, we examine and identify the special requirements that constrain the design and operation of a secure interoperability architecture for the CSs (particularly the CPCSs) of the smart grid. We focus on modelling the non-functional requirements that shape this infrastructure, following the NFR methodology to extract essential requirements, techniques for satisfying those requirements, and metrics for our architectural model. We study the services needed for the secure interoperability of the smart grid's CSs, reviewing security mechanisms in depth, from basic services to advanced procedures capable of coping with sophisticated threats against control systems, such as intrusion detection, protection and response systems. Our analysis is divided into different areas: prevention, awareness and reaction, and restoration, which together yield a robust security model for the protection of critical systems. We provide the design of an architectural model for the secure interoperability and interconnection of the smart grid's CPCSs. This scenario contemplates the interconnectivity of a federation of smart grid energy providers that interact through the secure interoperability platform to manage and control their infrastructures cooperatively. The platform takes into account the inherent characteristics and the new services and technologies accompanying the Industry 4.0 movement. Finally, we present a proof of concept of our architectural model, which helps validate the proposed design through experimentation. We create a set of validation cases that test some of the main functionalities offered by the architecture designed for secure interoperability, providing information about its performance and capabilities.