
    Trust and Privacy in Development of Publish/Subscribe Systems

    Publish/subscribe (pub/sub) is a widely deployed paradigm for information dissemination in a variety of distributed applications such as financial platforms, e-health frameworks and the Internet of Things. In essence, the pub/sub model considers one or more publishers generating feeds of information and a set of subscribers, the clients of the system. A pub/sub service is in charge of delivering the published information to interested clients. With the advent of cloud computing, we observe a growing tendency to externalize applications using pub/sub services to public clouds. This trend, despite its advantages, opens up multiple important data privacy and trust issues. Although multiple solutions for data protection have been proposed by the academic community, there is no unified view or framework describing how to deploy secure pub/sub systems on public clouds. To remedy this, we advocate a trust model that we believe can serve as a basis for such deployments.
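
    To make the roles named in the abstract concrete, the following is a minimal, illustrative sketch of a topic-based pub/sub broker in Python. It is not taken from the paper; the Broker class, topic names, and message format are assumptions. In a cloud deployment, this broker and the subscription state it holds would run on the public provider, which is exactly where the privacy and trust questions arise.

        # Minimal topic-based pub/sub sketch (illustrative assumptions only).
        from collections import defaultdict
        from typing import Callable, DefaultDict, List

        class Broker:
            """Pub/sub service: routes each published message to the
            subscribers that registered interest in its topic."""

            def __init__(self) -> None:
                self._subscribers: DefaultDict[str, List[Callable[[str], None]]] = defaultdict(list)

            def subscribe(self, topic: str, callback: Callable[[str], None]) -> None:
                # A subscriber (client of the system) declares interest in a topic.
                self._subscribers[topic].append(callback)

            def publish(self, topic: str, message: str) -> None:
                # The service delivers the published information to interested clients.
                for callback in self._subscribers[topic]:
                    callback(message)

        if __name__ == "__main__":
            broker = Broker()
            broker.subscribe("stocks/ACME", lambda m: print("trader received:", m))
            broker.publish("stocks/ACME", "price=42.10")  # a publisher emits one feed item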

    Automated Injection of Curated Knowledge Into Real-Time Clinical Systems: CDS Architecture for the 21st Century

    Clinical Decision Support (CDS) is primarily associated with alerts, reminders, order entry, rule-based invocation, diagnostic aids, and on-demand information retrieval. While valuable, these foci have been in production use for decades and do not provide a broader, interoperable means of plugging structured clinical knowledge into live electronic health record (EHR) ecosystems for purposes of orchestrating the user experiences of patients and clinicians. To date, the gap between knowledge representation and user-facing EHR integration has been considered an “implementation concern” requiring unscalable manual human effort and governance coordination. Drafting a questionnaire engineered to meet the HL7 CDS Knowledge Artifact specification, for example, carries no reasonable expectation that it can be imported and deployed into a live system without significant burdens. Dramatically reducing the time and effort gap in the research and application cycle could be revolutionary. Doing so, however, requires both a floor-to-ceiling precoordination of functional boundaries in the knowledge management lifecycle and a formalization of the human processes by which this occurs. This research introduces ARTAKA (Architecture for Real-Time Application of Knowledge Artifacts) as a concrete floor-to-ceiling technological blueprint for both provider health IT (HIT) and vendor organizations to incrementally and dynamically introduce value into existing systems. This is made possible by turning curated knowledge artifacts into services, which are then injected into a highly scalable backend infrastructure through automated orchestration via public marketplaces. Supplementary examples of client app integration are also provided. Compilation of knowledge into platform-specific form is left flexible, insofar as implementations comply with ARTAKA’s Context Event Service (CES) communication and Health Services Platform (HSP) Marketplace service packaging standards. Towards the goal of interoperable human processes, ARTAKA’s treatment of knowledge artifacts as a specialized form of software allows knowledge engineering to operate as a type of software engineering practice. Thus, nearly a century of software development processes, tools, policies, and lessons offer immediate benefit, in some cases with remarkable parity. Analyses of experiments are provided, along with guidelines on how selected aspects of software development life cycles (SDLCs) apply to knowledge artifact development in an ARTAKA environment. Portions of this work have been brought to Standards Developing Organizations (SDOs) with the intent of ultimately producing normative standards, and active relationships with other bodies have been established.
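
    As a rough illustration of the "knowledge artifact as a service" idea, the sketch below wraps a toy clinical rule behind an event handler so that backend orchestration could invoke it automatically rather than relying on manual, per-site integration. All names here (ContextEvent, KnowledgeArtifactService, on_context_event, the hba1c threshold) are hypothetical and are not drawn from the ARTAKA, CES, or HSP Marketplace specifications.

        # Hypothetical sketch; names and rule are illustrative, not from ARTAKA/CES/HSP.
        from dataclasses import dataclass, field
        from typing import Dict, Optional

        @dataclass
        class ContextEvent:
            # Stand-in for an EHR context event (e.g. a lab result being filed).
            event_type: str
            patient_id: str
            payload: Dict[str, float] = field(default_factory=dict)

        class KnowledgeArtifactService:
            """A curated knowledge artifact wrapped as a deployable service,
            so it can be invoked automatically when context events arrive."""

            def on_context_event(self, event: ContextEvent) -> Optional[str]:
                # Toy rule standing in for curated clinical knowledge.
                if event.event_type == "lab_result" and event.payload.get("hba1c", 0.0) >= 6.5:
                    return f"Suggest diabetes follow-up questionnaire for patient {event.patient_id}"
                return None

        if __name__ == "__main__":
            service = KnowledgeArtifactService()
            print(service.on_context_event(ContextEvent("lab_result", "pt-001", {"hba1c": 7.2})))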

    Real-world Machine Learning Systems: A survey from a Data-Oriented Architecture Perspective

    With the upsurge of interest in artificial intelligence, Machine Learning (ML) models are being deployed as parts of real-world systems. The design, implementation, and maintenance of such systems are challenged by real-world environments that produce larger amounts of heterogeneous data and by users requiring increasingly faster responses with efficient resource consumption. These requirements push prevalent software architectures to the limit when deploying ML-based systems. Data-oriented Architecture (DOA) is an emerging concept that better equips systems for integrating ML models. DOA extends current architectures to create data-driven, loosely coupled, decentralised, open systems. Even though papers on deployed ML-based systems do not mention DOA, their authors made design decisions that implicitly follow DOA. The reasons why, how, and the extent to which DOA is adopted in these systems are unclear. Implicit design decisions limit practitioners' knowledge of DOA when designing ML-based systems in the real world. This paper answers these questions by surveying real-world deployments of ML-based systems. The survey shows the design decisions made in these systems and the requirements they satisfy. Based on the survey findings, we also formulate practical advice to facilitate the deployment of ML-based systems. Finally, we outline open challenges to deploying DOA-based systems that integrate ML models.
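
    The Python sketch below illustrates, under assumed names (DataStream, sensor_producer, ml_scoring_consumer), the kind of loose coupling DOA describes: components exchange data through a shared stream rather than calling each other directly, and the ML component is just another consumer of that stream. It is an illustration, not an architecture taken from the surveyed systems.

        # Data-oriented sketch: producer and ML consumer coupled only through a stream.
        from typing import Dict, List

        class DataStream:
            """Shared, append-only stream; components exchange data through it
            rather than invoking each other directly (loose coupling)."""

            def __init__(self) -> None:
                self._records: List[Dict[str, float]] = []

            def append(self, record: Dict[str, float]) -> None:
                self._records.append(record)

            def read_from(self, offset: int) -> List[Dict[str, float]]:
                return self._records[offset:]

        def sensor_producer(stream: DataStream) -> None:
            # The producer knows only the stream, not who consumes its data.
            for t, value in enumerate([0.1, 0.7, 0.9]):
                stream.append({"t": float(t), "value": value})

        def ml_scoring_consumer(stream: DataStream, offset: int = 0) -> List[float]:
            # Toy "model": flags readings above a threshold; a real deployment
            # would load a trained model here instead.
            return [1.0 if r["value"] > 0.5 else 0.0 for r in stream.read_from(offset)]

        if __name__ == "__main__":
            stream = DataStream()
            sensor_producer(stream)
            print(ml_scoring_consumer(stream))  # -> [0.0, 1.0, 1.0]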

    Arizona Health Information Exchange

    Arizona strives to be the national role model for secure, interoperable health information exchange that facilitates safe, secure, high-quality and cost-effective health care. The purpose of the Health Information Exchange in Arizona is to improve the quality, safety and efficiency of wellness in the Arizona population by securely connecting patients and health care providers so that relevant and understandable information is available anytime, anywhere.

    Town of Mount Desert 2011 Annual Town Report


    Where can teens find health information? A survey of web portals designed for teen health information seekers

    The Web is an important source of health information for most teens with access to it (Gray et al., 2005a; Kaiser, 2001). While teens are likely to turn to the Web for health information, research has indicated that their skills in locating, evaluating and using health information are weak (Hansen et al., 2003; Skinner et al., 2003; Gray et al., 2005b). This behaviour suggests that the targeted approach to finding health information offered by web portals would be useful to teens. A web portal is the entry point for information on the Web. It is the front end, and often the filter, that users must pass through in order to link to actual content. Unlike general search engines such as Google, content linked to a portal has usually been pre-selected and even created by the organization that hosts the portal, assuring some level of quality control. The underlying architecture of the portal is structured and thus offers an organized approach to exploring a specific health topic. This paper reports on an environmental scan of the Web, the purpose of which was to identify and describe portals to general health information, in English and French, designed specifically for teens. It answers two key questions: first, what portals exist? And second, what are their characteristics? The portals were analyzed through the lens of four attributes: usability, interactivity, reliability and findability. Usability incorporates concepts of navigation, layout and design, clarity of concept and purpose, underlying architecture, in-site assistance and, for web content with text, readability. Interactivity relates to the type of interactions and the level of engagement required of the user to access health information on a portal. Interaction can come in the form of a game, a quiz, a creative experience, or a communication tool such as an instant messaging board, a forum or a blog. Reliability reflects the traditional values of accuracy, currency, credibility and bias and, in the web-based world, durability. Findability is simply the ease with which a portal can be discovered by a searcher using the search engine most commonly associated with the Web by young people - Google - and using terms related to teen health. Findability is an important consideration since the majority of teens begin their search for health information using search engines (CIBER, 2008; Hansen et al., 2003). The content linked to by the portals was not evaluated, nor was the portals’ efficacy as a health intervention. Teens looking for health information on the Web in English have a wide range of choices available, but French-language portals are much rarer and harder to find. A majority of the portals found and reviewed originated from hospitals, associations specializing in a particular disease, and governmental agencies, suggesting that portals for teens on health-related topics are generally reliable. However, only a handful of the portals reviewed were easy to find, suggesting that valuable resources for teens remain buried in the Web.

    New Waves of IoT Technologies Research – Transcending Intelligence and Senses at the Edge to Create Multi Experience Environments

    The next wave of the Internet of Things (IoT) and the Industrial Internet of Things (IIoT) brings new technological developments that incorporate radical advances in Artificial Intelligence (AI), edge computing, new sensing capabilities, stronger security protection and autonomous functions, accelerating progress towards IoT systems that can self-develop, self-maintain and self-optimise. The emergence of hyper-autonomous IoT applications with enhanced sensing, distributed intelligence, edge processing and connectivity, combined with human augmentation, has the potential to power the transformation and optimisation of industrial sectors and to change the innovation landscape. This chapter reviews the most recent advances in the next wave of the IoT by looking not only at the technology enabling the IoT but also at the platforms and smart data aspects that will bring intelligence, sustainability, dependability and autonomy, and will support human-centric solutions.

    Design and implementation of a telemetry platform for high-performance computing environments

    A new generation of high-performance and distributed computing applications and services rely on adaptive and dynamic architectures and execution strategies to run efficiently, resiliently, and at scale in today’s HPC environments. These architectures require insights into their execution behaviour and the state of their execution environment at various levels of detail in order to make context-aware decisions. HPC telemetry provides this information: it describes the continuous stream of time series and event data that is generated on HPC systems by the hardware, operating systems, services, runtime systems, and applications. Current HPC ecosystems do not provide the conceptual models, infrastructure, and interfaces to collect, store, analyse, and integrate telemetry in a structured and efficient way. Consequently, applications and services largely depend on one-off solutions and custom-built technologies to achieve these goals, introducing significant development overheads that inhibit portability and mobility. To facilitate a broader mix of applications, more efficient application development, and swift adoption of adaptive architectures in production, a comprehensive framework for telemetry management and analysis must be provided as part of future HPC ecosystem designs. This thesis provides the blueprint for such a framework: it proposes a new approach to telemetry management in HPC, the Telemetry Platform concept. Departing from the observation that telemetry data and the corresponding analysis and integration patterns on modern multi-tenant HPC systems have much in common with the patterns observed in large-scale data analytics or “Big Data” platforms, the telemetry platform concept takes the data platform paradigm and architectural approach and applies them to HPC telemetry. The result is the blueprint for a system that provides services for storing, searching, analysing, and integrating telemetry data in HPC applications and other HPC system services. It allows users to create and share telemetry-data-driven insights using everything from simple time-series analysis to complex statistical and machine learning models, while at the same time hiding many of the inherent complexities of data management, such as data transport, clean-up, storage, cataloguing, and access management, and providing appropriate and scalable analytics and integration capabilities. The main contributions of this research are (1) the application of the data platform concept to HPC telemetry data management and usage; (2) a graph-based, time-variant telemetry data model that captures the structures and properties of the platform and its applications and in which telemetry data can be organized; (3) an architecture blueprint and prototype of a concrete implementation and integration architecture of the telemetry platform; and (4) a proposal for decoupled HPC application architectures that separate telemetry data management and feedback-control-loop logic from the core application code. First experimental results with the prototype implementation suggest that the telemetry platform paradigm can reduce overhead and redundancy in the development of telemetry-based application architectures and lower the barrier for HPC systems research and the provisioning of new, innovative HPC system services.
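
    As a rough idea of what a graph-based, time-variant telemetry data model can look like, the sketch below attaches time series to platform and application entities and time-stamps the relationships between them so the topology itself can evolve. The class, field, and metric names are assumptions made for illustration and do not reproduce the thesis's actual schema.

        # Illustrative graph-based, time-variant telemetry model (assumed names).
        from dataclasses import dataclass, field
        from typing import Dict, List, Tuple

        @dataclass
        class TelemetryNode:
            # A platform or application entity (e.g. a compute node or a job)
            # with named time series attached to it.
            node_id: str
            kind: str
            series: Dict[str, List[Tuple[float, float]]] = field(default_factory=dict)

            def record(self, metric: str, timestamp: float, value: float) -> None:
                self.series.setdefault(metric, []).append((timestamp, value))

        @dataclass
        class TelemetryGraph:
            # Entities plus time-stamped relationships, so the structure is time-variant.
            nodes: Dict[str, TelemetryNode] = field(default_factory=dict)
            edges: List[Tuple[str, str, str, float]] = field(default_factory=list)  # (src, dst, relation, since)

            def add_node(self, node: TelemetryNode) -> None:
                self.nodes[node.node_id] = node

            def link(self, src: str, dst: str, relation: str, since: float) -> None:
                self.edges.append((src, dst, relation, since))

        if __name__ == "__main__":
            g = TelemetryGraph()
            g.add_node(TelemetryNode("cn001", "compute_node"))
            g.add_node(TelemetryNode("job42", "application"))
            g.link("job42", "cn001", "runs_on", since=0.0)
            g.nodes["cn001"].record("cpu_util", timestamp=1.0, value=0.83)
            print(g.nodes["cn001"].series["cpu_util"])  # -> [(1.0, 0.83)]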
