71 research outputs found

    A dynamic default revision mechanism for speculative computation

    In this work, a default revision mechanism is introduced into Speculative Computation to manage incomplete information. The default revision is supported by a method for generating default constraints based on Bayesian Networks. The method enables the generation of an initial set of defaults, which is used to produce the most likely scenarios during the computation, represented by active processes. As facts arrive, the Bayesian Network is used to derive new defaults. The objective of this new dynamic mechanism is to keep the active processes coherent with the arrived facts. This is achieved by changing the initial set of default constraints during the reasoning process in Speculative Computation. A practical example in clinical decision support is described.
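
    The revision loop lends itself to a brief, hypothetical sketch. Here a toy conditional probability table stands in for the Bayesian Network, and the variable names and probabilities are illustrative assumptions rather than the paper's actual model:

        # Illustrative only: a toy conditional probability table standing in for
        # the Bayesian Network used to generate and revise default constraints.
        CPT = {
            "flu=yes": {"fever=high": 0.7, "fever=low": 0.3},
            "flu=no":  {"fever=high": 0.1, "fever=low": 0.9},
        }

        def most_likely_default(evidence):
            """Return the most probable value under the evidence, used as a default."""
            dist = CPT[evidence]
            return max(dist, key=dist.get)

        # Initial default constraint, generated before any facts arrive.
        defaults = {"fever": most_likely_default("flu=no")}    # assumes fever=low
        print("initial defaults:", defaults)

        # A fact arrives (the patient does have the flu): query the network again
        # and revise the default so active processes stay coherent with the fact.
        defaults["fever"] = most_likely_default("flu=yes")     # now fever=high
        print("revised defaults:", defaults)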

    Clinical decision support: Knowledge representation and uncertainty management

    Programa Doutoral em Engenharia Biomédica (Doctoral Programme in Biomedical Engineering). Decision-making in clinical practice faces many challenges due to the inherent risks of being a health care professional. From medical error to undesired variations in clinical practice, the mitigation of these issues seems to be tightly connected to adherence to Clinical Practice Guidelines, which are evidence-based recommendations. The deployment of Clinical Practice Guidelines in computational systems for clinical decision support has the potential to positively impact health care. However, current approaches to Computer-Interpretable Guidelines exhibit a set of issues that leave them wanting. These issues are related to the lack of expressiveness of their underlying models, the complexity of knowledge acquisition with their tools, the absence of support for the clinical decision-making process, and the style of communication of Clinical Decision Support Systems implementing Computer-Interpretable Guidelines. Such issues are obstacles that prevent these systems from showing properties like modularity, flexibility, adaptability, and interactivity. All these properties reflect the concept of living guidelines. The purpose of this doctoral thesis is, thus, to provide a framework that enables the expression of these properties. The modularity property is conferred by the ontological definition of Computer-Interpretable Guidelines and the assistance in guideline acquisition provided by an editing tool, allowing for the management of multiple knowledge patterns that can be reused. Flexibility is provided by the representation primitives defined in the ontology, meaning that the model is adjustable to guidelines from different categories and specialities. As for adaptability, this property is conferred by mechanisms of Speculative Computation, which allow the Decision Support System not only to reason with incomplete information but also to adapt to changes of state, such as suddenly learning the missing information. The solution proposed for interactivity consists in embedding Computer-Interpretable Guideline advice directly into the daily life of health care professionals and providing a set of reminders and notifications that help them keep track of their tasks and responsibilities. Together, these solutions make up the CompGuide framework for the expression of Clinical Decision Support Systems based on Computer-Interpretable Guidelines.
    The work of the PhD candidate Tiago José Martins Oliveira is supported by a grant from FCT - Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology) with the reference SFRH/BD/85291/2012.

    Qualitative Process Analysis: Theoretical Requirements and Practical Implementation in Naval Domain

    Understanding complex behaviours is an essential component of everyday life, integrated into daily routines as well as specialised research. To handle the increasing amount of data available from (logistic) dynamic scenarios, the analysis of the behaviour of agents in a given environment is becoming more automated and thus requires reliable new analytical methods. This thesis seeks to improve the analysis of observed data in dynamic scenarios by developing a new model for transforming sparse behavioural observations into realistic explanations of agent behaviours, with the goal of testing that model in a real-world maritime navigation scenario.

    Engineering multiuser museum interactives for shared cultural experiences

    Multiuser museum interactives are computer systems installed in museums or galleries which allow several visitors to interact together with digital representations of artefacts and information from the museum's collection. In this paper, we describe WeCurate, a socio-technical system that supports co-browsing across multiple devices and enables groups of users to collaboratively curate a collection of images through negotiation, collective decision making, and voting. The engineering of such a system is challenging, since it requires addressing several problems such as distributed workflow control, collective decision making, and multiuser synchronous interactions. The system uses a peer-to-peer Electronic Institution (EI) to manage and execute a distributed curation workflow and models community interactions as scenes, where users engage in different social activities. Social interactions are enacted by intelligent agents that interface the users participating in the curation workflow with the EI infrastructure. The multiagent system supports collective decision making, representing the actions of the users within the EI, where the agents advocate and support the desires of their users, e.g. aggregating opinions to decide which images are interesting enough to be discussed, and proposing interactions and resolutions between disagreeing group members. Throughout the paper, we describe the enabling technologies of WeCurate: the peer-to-peer EI infrastructure, the agent collective decision making capabilities, and the multimodal interface. We present a system evaluation based on data collected from cultural exhibitions in which WeCurate was used as the supporting multiuser interactive.
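
    As a rough illustration of the kind of opinion aggregation such agents might perform, the sketch below averages the interest ratings that users give an image and compares the result with a discussion threshold; the function name, the rating scale, and the 0.5 threshold are assumptions made for this example, not the WeCurate implementation:

        # Hypothetical sketch: agents pool their users' interest ratings and decide
        # whether an image should move to a discussion scene.
        from statistics import mean

        def should_discuss(ratings, threshold=0.5):
            """Aggregate per-user interest ratings in [0.0, 1.0] for one image."""
            return bool(ratings) and mean(ratings) >= threshold

        ratings = {"alice": 0.8, "bob": 0.4, "carol": 0.7}   # one rating per user agent
        if should_discuss(list(ratings.values())):
            print("Image is interesting enough: open a discussion scene.")
        else:
            print("Skip to the next image in the curation workflow.")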

    Declarative techniques for modeling and mining business processes

    Organisations today face an apparent contradiction. Although they have invested heavily in information systems that automate their business processes, these systems seem to make it harder for them to gain a good insight into how those processes unfold. A poor understanding of business processes threatens both flexibility and compliance. Flexibility is important because continuously changing market conditions force organisations to adapt their business processes quickly and smoothly. In addition, organisations must be able to guarantee that their operations comply with the laws, guidelines, and standards imposed on them. Scandals such as the recently uncovered fraud at the French bank Société Générale demonstrate the importance of compliance and flexibility. By producing forged documents and circumventing fixed control points, a single trader was able to turn risk-free arbitrage on price differences in futures into risky, speculative trading in these financial derivatives. The unhedged, unauthorised positions remained hidden for a long time because of deficient internal controls and shortcomings in IT security and access control. To prevent such fraud in the future, it is first of all necessary to gain insight into the bank's operational processes and the related control processes. In this text we discuss two approaches that can be used to increase insight into business processes: process modelling and process mining. The research sought to develop techniques for process modelling and process mining that are declarative. Process modelling is the manual construction of a formal model that describes a relevant aspect of a business process, based on information largely obtained from interviews. Process models must provide adequate information about the business processes if they are to be used meaningfully in their design, implementation, execution, and analysis. The challenge is to develop new process modelling languages that provide adequate information to achieve this goal. Declarative process languages make the information about business concerns explicit. We characterise and motivate declarative process languages and examine a number of existing techniques. Furthermore, we introduce a generalising framework for declarative process modelling within which existing process languages can be positioned. This framework is called the EM-BrA²CE framework, which stands for "Enterprise Modeling using Business Rules, Agents, Activities, Concepts and Events". It consists of a formal ontology and a formal execution model, and it lays the ontological foundation for the languages and techniques developed later in the doctorate. Process mining is the automatic construction of a process model based on so-called event logs from information systems. Today, many processes are recorded in event logs by information systems. Event logs record, in chronological order, who performed which activity and when. Analysing event logs can yield an accurate picture of what actually happens within an organisation.
    To be usable, the mined process models must satisfy criteria such as accuracy, comprehensibility, and justifiability. Existing process mining techniques focus mainly on the first criterion: accuracy. Declarative process mining techniques also address the comprehensibility and justifiability of the mined models. They are more comprehensible because they attempt to represent process models using declarative representations. Moreover, declarative techniques increase the justifiability of the mined models, because they allow the prior knowledge, inductive bias, and language bias of a learning algorithm to be configured. Inductive logic programming (ILP) is a learning technique that is inherently declarative. In the text we show how process mining can be cast as an ILP classification problem that learns the logical conditions under which an event occurs (a positive event) or does not occur (a negative event). Many event logs naturally contain no negative events indicating that a particular activity could not take place. To address this problem, we describe a technique for generating artificial negative events, called AGNEs (process discovery by Artificially Generated Negative Events). The generation of artificial negative events amounts to a configurable inductive bias. The AGNEs technique has been implemented as a mining plugin in the ProM framework. By casting process discovery as a first-order classification problem on event logs with artificial negative events, the traditional metrics for quantifying precision and recall can be applied to quantify the precision and completeness of a process model with respect to an event log. In the text we propose two new metrics. These new metrics, together with existing ones, were used for an extensive evaluation of the AGNEs process discovery technique in both an experimental and a practical setting.
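
    The intuition behind artificial negative events can be shown with a deliberately simplified sketch (this is not the AGNEs algorithm itself, whose candidate generation and configurable bias are richer): for every prefix of a trace, any activity that never follows that same prefix anywhere in the log is recorded as a negative event at that position.

        # Simplified illustration of artificial negative events for process mining.
        # An activity is marked negative after a prefix if no trace in the log ever
        # continues that exact prefix with it. (AGNEs itself uses a richer,
        # configurable notion of similar behaviour.)
        def negative_events(log):
            activities = {a for trace in log for a in trace}
            negatives = set()
            for trace in log:
                for i in range(len(trace)):
                    prefix = tuple(trace[:i])
                    observed = {t[i] for t in log if tuple(t[:i]) == prefix and len(t) > i}
                    negatives |= {(prefix, a) for a in activities - observed}
            return negatives

        event_log = [["register", "check", "pay"],
                     ["register", "pay", "check"]]
        for prefix, activity in sorted(negative_events(event_log)):
            print(f"after {list(prefix)}: '{activity}' is an artificial negative event")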

    Representation and Extension in Consciousness Studies

    Various theories suggest that conscious phenomena are based exclusively on brain activity, while others regard them as a result of the interaction between embodied agents and their environment. In this paper, I will consider whether this divergence entails accepting that different theories can be applied at different scales (as in the case of physics), or whether they are reconcilable. I will suggest that investigating how the term representation is used can reveal some hints upon which we can build to bridge the gulf between the two poles in the long run. In my argumentation I will rely on some earlier philosophical insights, such as those of Descartes, James, Wittgenstein and Merleau-Ponty, as well as on research based on global workspace theory and the conceptions of embodied and enacted cognition. I will suggest that, within a wider horizon of investigation, the ambiguity regarding the term representation will decrease.

    Meaning naturally--a partial defense of covariation semantics

    Thesis (Ph.D.) -- Massachusetts Institute of Technology, Dept. of Linguistics and Philosophy, 1990. Includes bibliographical references (p. 434-443). By Paul Michael Pietroski, Ph.D.

    An Algorithmic Theory of the Policy Process

    With a few exceptions, current theories of the policy process do not model or measure the policy process using the graphical process notations that are common within information science, business administration and many natural sciences. The reason is that in the post-war period the needs of business process analysis came to dominate social science applications of process science, whilst the needs of public policy process analysis remained largely unaddressed. As a result, modern graphical process notations can encode and quantify the instrumental properties of cost and efficiency of a business process, but not the normative properties of transparency, accountability or legitimacy of the much more complex policy-making process. There have been many other unfortunate consequences. Business process modelling evolved into business process reengineering and became a critical enabler of a period of unprecedented hyper-globalization commencing in the 1990s. However, it did so by encoding and quantifying the instrumental dimensions of cost and efficiency of globalized production processes and not their normative dimensions of domestic employment and social welfare transfers. We live to this day with the consequences: the emergence of destabilizing populist national movements and rising security and defense tensions between former trading partners. In recent years, however, there have been several important new developments. Firstly, a new class of process modelling tools has emerged at the juncture of information science and business administration that can model much more complex governance and policy-making processes as rule-based declarative process graphs instead of sequence-based imperative process graphs. Secondly, information science is now introducing a capacity for normative reasoning and moral dilemma resolution into a range of technologies, from multi-agent systems and artificial societies to self-driving vehicles and autonomous battle drones. This creates new opportunities for a collaboration between policy process analysis and information science to reengineer legacy policy-making processes and organizations in terms of normatively driven declarative processes. These reengineered policy-making processes must score better not only against the instrumental criteria of cost and efficiency but also against the normative criteria of transparency, accountability, and legitimacy. Consequently, the metrics presented in this dissertation reconnect public policy process analysis with the tools and results of decades of process research in the fields of information science, business administration and many natural sciences, and support a new theory of the public policy process as an algorithm whose purpose is the generation of solutions to public goods allocation problems. To illustrate the principles of the techniques involved and the utility of the approach, a case study analysis and prediction of the Chinese public health policy response to the COVID-19 pandemic of 2020/21 is presented.
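
    The contrast between sequence-based imperative graphs and rule-based declarative graphs can be illustrated with a small sketch. The DECLARE-style "response" and "precedence" rules below are a common example of declarative constraints and are used here purely for illustration; they are not the specific notation adopted in the dissertation:

        # Imperative model: the process is one fixed sequence of steps.
        IMPERATIVE_MODEL = ["draft", "consult", "approve", "publish"]

        def conforms_imperative(trace):
            return trace == IMPERATIVE_MODEL

        # Declarative model: any trace is allowed as long as every rule holds.
        def response(trace, a, b):
            """Every occurrence of a is eventually followed by b."""
            return all(b in trace[i + 1:] for i, x in enumerate(trace) if x == a)

        def precedence(trace, a, b):
            """b may only occur after a has already occurred."""
            return all(a in trace[:i] for i, x in enumerate(trace) if x == b)

        def conforms_declarative(trace):
            return (response(trace, "draft", "approve")
                    and precedence(trace, "consult", "publish"))

        trace = ["draft", "consult", "revise", "approve", "publish"]
        print(conforms_imperative(trace))    # False: the extra step breaks the sequence
        print(conforms_declarative(trace))   # True: all constraints are satisfied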

    Intelligent Sensor Networks

    In the last decade, wireless or wired sensor networks have attracted much attention. However, most designs target general sensor network issues, including the protocol stack (routing, MAC, etc.) and security. This book focuses on the close integration of sensing, networking, and smart signal processing via machine learning. Based on their world-class research, the authors present the fundamentals of intelligent sensor networks. They cover sensing and sampling, distributed signal processing, and intelligent signal learning. In addition, they present cutting-edge research results from leading experts.

    Anales del XIII Congreso Argentino de Ciencias de la Computación (CACIC)

    Contents: computer architectures; embedded systems; service-oriented architectures (SOA); communication networks; heterogeneous networks; advanced networks; wireless networks; mobile networks; active networks; network and service administration and monitoring; Quality of Service (QoS, SLAs); information security, authentication, and privacy; infrastructure for digital signatures and digital certificates; vulnerability analysis and detection; operating systems; P2P systems; middleware; grid infrastructure; integration services (Web Services or .NET). Red de Universidades con Carreras en Informática (RedUNCI).