19 research outputs found

    Development of a methodology to integrate and leverage spatio-temporal water quality information at the watershed scale: an application example for drinking water source protection

    Get PDF
    Integrated water resource management at the watershed scale requires good knowledge of the territory under study in order to include "source-to-tap" water quality monitoring in land-use planning. Efficient access to all types of water quality data, including land use and risks (anthropogenic and environmental impacts, etc.), would allow stakeholders from various sectors to consider the complete picture of available information for informed decision making. Inventorying, assembling, analysing and interpreting the data can pose an additional challenge when the information to be collected comes from different sources, particularly for watershed organizations (organismes de bassins versants, OBV) and small municipalities that do not always have the means to employ specialists in database management and geographic information systems (GIS). This Master's project focuses on the development of a methodological framework for the acquisition, management and use of water quality data in watersheds containing drinking water intakes. Following an in-depth review of governmental requirements for source protection, data management is optimized in accordance with Québec's Règlement sur le prélèvement des eaux et leur protection (RPEP). In a context where many Québec municipalities must meet the RPEP requirements by 2021, a software tool specialized in water quality data management was adapted to assist water resource managers mandated to comply with the RPEP. The project streamlines the acquisition and processing of the information (on water and territory) required to assess the vulnerability of drinking water intakes, a fundamental step in protecting source waters and in harmonizing that protection with land use. The methodology, developed for a case study in the Laurentides region near Montréal, is applicable to other watersheds in Québec and elsewhere.

    Quality of Information as an indicator of Trust in the Internet of Things

    Get PDF
    The past decade has seen a rise in the complexity and scale of software systems, particularly with the emergence of the Internet of Things (IoT), which consists of large-scale, heterogeneous entities and therefore makes it difficult to provide trustworthy services. To overcome such challenges, providing high-quality information to IoT service providers and monitoring the trust relationships of end-users toward the services are paramount. Such trust relationships are user-oriented, subjective phenomena that depend on specific qualities of the data. Following this insight, we propose a mechanism to evaluate the quality of information (QoI) for streaming data from sensor devices, and then use the QoI evaluation score as an indicator of trust. Concepts and an assessment methodology for QoI, along with a trust monitoring system, are described. We also develop a framework that classifies streaming data based on semantic context and generates a QoI score as a relevant input for a trust monitoring component. This framework enables dynamic trust management in the IoT for both end-users and services, and empowers service providers to deliver trustworthy, high-quality IoT services. Challenges encountered during implementation and contributions to standardization are discussed. We believe this paper offers a better understanding of QoI and its importance in trust evaluation for IoT applications, and provides a detailed implementation of the QoI and trust components for real-world applications and services.
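    As a rough, hypothetical sketch of the idea described above (not the paper's implementation), the Python snippet below scores a window of streaming sensor readings on three assumed QoI dimensions (completeness, timeliness, plausibility) and maps the aggregate score to a coarse trust indicator; all thresholds, weights and names are illustrative.

```python
# Hypothetical sketch of a QoI-as-trust-indicator pipeline: score a window of
# sensor readings on a few assumed QoI dimensions and expose the aggregate as
# a trust signal for a monitoring component.
from dataclasses import dataclass
from statistics import mean
import time

@dataclass
class Reading:
    timestamp: float   # seconds since epoch
    value: float

def qoi_score(window, expected_rate_hz=1.0, value_range=(-40.0, 60.0),
              max_age_s=30.0, now=None):
    """Return a QoI score in [0, 1] for a window of readings."""
    if not window:
        return 0.0
    now = time.time() if now is None else now
    span = max(window[-1].timestamp - window[0].timestamp, 1e-9)
    # Completeness: observed sample count vs. what the nominal rate implies.
    completeness = min(1.0, len(window) / (expected_rate_hz * span + 1))
    # Timeliness: how fresh the newest reading is.
    age = now - window[-1].timestamp
    timeliness = min(1.0, max(0.0, 1.0 - age / max_age_s))
    # Plausibility: fraction of values inside the expected physical range.
    lo, hi = value_range
    plausibility = mean(1.0 if lo <= r.value <= hi else 0.0 for r in window)
    return (completeness + timeliness + plausibility) / 3.0

def trust_indicator(score, threshold=0.7):
    """Map the QoI score to a coarse trust label."""
    return "trusted" if score >= threshold else "untrusted"

# Example: three fresh, in-range temperature readings.
now = time.time()
w = [Reading(now - 2, 21.5), Reading(now - 1, 21.6), Reading(now, 21.4)]
print(trust_indicator(qoi_score(w, now=now)))
```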

    Event-Driven Duplicate Detection: A Probability-based Approach

    Get PDF
    The importance of probability-based approaches for duplicate detection has been recognized in both research and practice. However, existing approaches do not aim to consider the underlying real-world events resulting in duplicates (e.g., that a relocation may lead to the storage of two records for the same customer, once before and after the relocation). Duplicates resulting from real-world events exhibit specific characteristics. For instance, duplicates resulting from relocations tend to have significantly different attribute values for all address-related attributes. Hence, existing approaches focusing on high similarity with respect to attribute values are hardly able to identify possible duplicates resulting from such real-world events. To address this issue, we propose an approach for event-driven duplicate detection based on probability theory. Our approach assigns the probability of being a duplicate resulting from real-world events to each analysed pair of records while avoiding limiting assumptions (of existing approaches). We demonstrate the practical applicability and effectiveness of our approach in a real-world setting by analysing customer master data of a German insurer. The evaluation shows that the results provided by the approach are reliable and useful for decision support and can outperform well-known state-of-the-art approaches for duplicate detection
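    As a loose illustration of what an event-driven, probability-based pairing score could look like (not the authors' model), the sketch below combines assumed priors for real-world events such as a relocation with the likelihood of the observed attribute-agreement pattern under each hypothesis, yielding a posterior duplicate probability for a record pair; all numbers are invented for illustration.

```python
# Hypothetical sketch: probability that a record pair is a duplicate caused by
# a real-world event, via Bayes' rule over assumed events. All priors and
# likelihoods below are illustrative, not taken from the paper.

# P(event) for a randomly drawn pair of records.
EVENT_PRIORS = {
    "same_person_relocated": 0.02,
    "same_person_name_change": 0.01,
    "different_persons": 0.97,
}

# P(observation | event) for a coarse observation: do name and address agree?
LIKELIHOODS = {
    #                          (name_match, address_match)
    "same_person_relocated":   {(True, False): 0.85, (True, True): 0.10,
                                (False, False): 0.04, (False, True): 0.01},
    "same_person_name_change": {(False, True): 0.85, (True, True): 0.10,
                                (False, False): 0.04, (True, False): 0.01},
    "different_persons":       {(False, False): 0.90, (True, False): 0.05,
                                (False, True): 0.04, (True, True): 0.01},
}

def duplicate_probability(name_match: bool, address_match: bool) -> float:
    """Posterior probability that the pair stems from the same person."""
    obs = (name_match, address_match)
    joint = {e: EVENT_PRIORS[e] * LIKELIHOODS[e].get(obs, 0.0)
             for e in EVENT_PRIORS}
    evidence = sum(joint.values())
    if evidence == 0.0:
        return 0.0
    return (joint["same_person_relocated"]
            + joint["same_person_name_change"]) / evidence

# Example: same name, different address -- the typical relocation duplicate.
print(duplicate_probability(name_match=True, address_match=False))
```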

    State of knowledge on information overload in decision making in scientific articles published between 2007 and 2016

    Get PDF
    Advisor: Romualdo Douglas Colauto. Monograph (specialization) - Universidade Federal do Paraná, Setor de Ciências Sociais Aplicadas, Specialization Course in Controllership. Includes references. Abstract: The volume of information plays a powerful role in decision making, and the speed at which information is produced and disseminated may be directly related to information overload. This study describes the state of knowledge on information overload in decision making in scientific articles published between 2007 and 2016. The data analysed were obtained from the ScienceDirect and EBSCOHost databases. To this end, the multicriteria methodology for assembling a bibliographic portfolio known as Methodi Ordinatio, developed by Pagani, Kovaleski and Resende (2015), was applied with the support of the Zotero and JabRef-3.3 software. The research is quantitative in nature; with respect to its objectives it is classified as exploratory; and with respect to its technical procedures it is characterized as bibliographic. The results show that the number of publications on the topic is still incipient and that they are dispersed. The findings also indicate that the largest number of publications lies in the area of decision support systems (the journal Decision Support Systems, accounting for 10%), the second largest in psychology (Journal of Experimental Psychology, 6.57%), and the remaining publications account for 3.09% each. It is concluded that the results of this research can be used by researchers who intend to study the subject, by saving time in the search for the main publications and authors on the topic. This illustrates the relevance that "state of knowledge" research can have for the continuous advancement of science over time.
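    For reference, the InOrdinatio ranking index of the Methodi Ordinatio is commonly reproduced in the form below (a recollection that should be verified against Pagani, Kovaleski and Resende, 2015), where IF is the journal impact factor, alpha is a weight from 1 to 10 expressing how much recency matters, and the sum counts the citations of each paper:

```latex
\mathrm{InOrdinatio} = \frac{\mathrm{IF}}{1000}
  + \alpha \,\bigl[\, 10 - (\mathrm{ResearchYear} - \mathrm{PublicationYear}) \,\bigr]
  + \sum_i C_i
```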

    Information overload in decision making: perception of decisions in an organizational environment

    Get PDF
    Advisor: Prof. Dr. Edelvino Razzolini Filho. Master's dissertation - Universidade Federal do Paraná, Setor de Ciências Sociais e Aplicadas, Programa de Pós-Graduação em Gestão da Informação. Defended: Curitiba, 11/02/2019. Includes references: p. 56-64. Abstract: To compensate for situations of rapid and irregular change, or a context loaded with novelty, individuals have to process much more information than before in order to make effective and rational decisions. With this in mind, the dissertation analyses how students of the lato sensu courses of the Setor de Ciências Sociais Aplicadas of the Universidade Federal do Paraná, with experience in the labour market, perceived whether information overload affected their decision-making processes in 2018. The discussion is framed by the bounded rationality argument proposed by Herbert Simon, which suggests that the assumption that all relevant information is used cannot hold if decision makers face choice conditions that exceed their capacity. The research method was a survey with a quantitative approach, which is frequently applied in descriptive studies that seek to discover and classify relationships between variables, as is the case here. The empirical evidence indicates that information overload does not affect decision making in the sampled group. For the analysis, two questionnaires measuring information overload and information overload in decision making, respectively, were developed and statistically validated. Considering that academia and popular culture treat information overload as a recognized and resonant cultural concept that persists even without solid corroboration, the results presented here contribute by providing two statistically validated instruments for empirical application, thereby strengthening the theoretical basis of studies on information overload. Keywords: Quantitative studies. Psychometric instruments. Statistical validity. Scale.

    Determining the use of data quality metadata (DQM) for decision making purposes and its impact on decision outcomes — an exploratory study

    No full text
    Decision making processes and their outcomes can be affected by a number of factors; among them, the quality of the data is critical. Poor-quality data cause poor decisions. Although this fact is widely known, data quality (DQ) remains a critical issue in organizations because of the huge data volumes available in their systems. The literature therefore suggests that communicating the DQ level of a specific data set to decision makers in the form of DQ metadata (DQM) is essential. However, the presence of DQM may overload decision makers or demand cognitive resources beyond their capacities, which can adversely impact decision outcomes. To address this issue, we conducted an experiment to explore the impact of DQM on decision outcomes, to identify groups of decision makers who benefit from DQM, and to explore factors that enhance or hinder the use of DQM. The findings of a statistical analysis suggest that the use of DQM can be enhanced by data quality training or education. Decision makers with a certain level of data quality awareness used DQM more to solve a decision task than those with no data quality awareness, and they also reached a higher decision accuracy. However, the efficiency of decision makers suffers when DQM is used. Our suggestion is that DQM can have a positive impact on decision outcomes if it is paired with certain characteristics of decision makers, such as high data quality knowledge. However, the results do not confirm that DQM should be included in data warehouses as a general business practice; instead, organizations should first investigate the use and impact of DQM in their own setting before maintaining DQM in data warehouses.
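    As a small, hypothetical illustration of what "DQ metadata attached to a data set" could look like in practice (the structure and the scores are assumptions, not taken from the study), the sketch below annotates a column with DQM so that an aggregate can be read together with its quality context:

```python
# Hypothetical sketch: a dataset carrying column-level DQ metadata so that a
# decision maker sees quality scores next to the numbers derived from the data.
from dataclasses import dataclass, field

@dataclass
class DQMetadata:
    completeness: float  # share of non-missing values, 0..1
    accuracy: float      # estimated share of correct values, 0..1
    timeliness: float    # freshness indicator, 0..1

@dataclass
class AnnotatedDataset:
    rows: list
    dqm: dict = field(default_factory=dict)  # column name -> DQMetadata

sales = AnnotatedDataset(
    rows=[{"region": "North", "revenue": 1200.0},
          {"region": "South", "revenue": None}],
    dqm={"revenue": DQMetadata(completeness=0.5, accuracy=0.9, timeliness=0.8)},
)

# Surface the DQM alongside the aggregate so the figure is read with caution.
known = [r["revenue"] for r in sales.rows if r["revenue"] is not None]
print(sum(known), sales.dqm["revenue"])
```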

    Event-Driven Duplicate Detection: A probability based Approach

    Get PDF
    The importance of probability-based approaches for duplicate detection has been recognized in both research and practice. However, existing approaches do not aim to consider the underlying real-world events resulting in duplicates (e.g., that a relocation may lead to the storage of two records for the same customer, once before and after the relocation). Duplicates resulting from real-world events exhibit specific characteristics. For instance, duplicates resulting from relocations tend to have significantly different attribute values for all address-related attributes. Hence, existing approaches focusing on high similarity with respect to attribute values are hardly able to identify possible duplicates resulting from such real-world events. To address this issue, we propose an approach for event-driven duplicate detection based on probability theory. Our approach assigns the probability of being a duplicate resulting from real-world events to each analysed pair of records while avoiding limiting assumptions (of existing approaches). We demonstrate the practical applicability and effectiveness of our approach in a real-world setting by analysing customer master data of a German insurer. The evaluation shows that the results provided by the approach are reliable and useful for decision support and can outperform well-known state-of-the-art approaches for duplicate detection.

    Assessing Data Quality - A Probability-based Metric for Semantic Consistency

    Get PDF
    We present a probability-based metric for semantic consistency using a set of uncertain rules. As opposed to existing metrics for semantic consistency, our metric makes it possible to consider rules that are expected to be fulfilled only with specific probabilities. The resulting metric values represent the probability that the assessed dataset is free of internal contradictions with regard to the uncertain rules and thus have a clear interpretation. The theoretical basis for determining the metric values is formed by statistical tests and the concept of the p-value, allowing the metric value to be interpreted as a probability. We demonstrate the practical applicability and effectiveness of the metric in a real-world setting by analyzing a customer dataset of an insurance company. Here, the metric was applied to identify semantic consistency problems in the data and to support decision making, for instance when offering individual products to customers.
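    As a loose illustration of the p-value idea (assuming one binomial test per uncertain rule and a conservative aggregation over rules; this is not the paper's exact metric), the sketch below checks whether the observed number of rule violations in a small, made-up customer dataset is compatible with the probability each rule is expected to hold:

```python
# Hypothetical sketch: test whether observed violations of uncertain rules are
# compatible with their expected violation probabilities. Rule probabilities
# and the customer data below are invented for illustration.
from scipy.stats import binomtest

def consistency_pvalue(records, rules):
    """
    records: list of dicts (one per record).
    rules:   list of (predicate, p_holds) pairs, where predicate(record) is True
             when the rule is satisfied and p_holds is the probability the rule
             is expected to hold for a single record.
    Returns the smallest per-rule p-value as a conservative consistency signal:
    a small value indicates more violations than the uncertain rule allows.
    """
    pvalues = []
    for predicate, p_holds in rules:
        n = len(records)
        violations = sum(0 if predicate(r) else 1 for r in records)
        # H0: the per-record violation probability is at most 1 - p_holds.
        test = binomtest(violations, n, 1.0 - p_holds, alternative="greater")
        pvalues.append(test.pvalue)
    return min(pvalues)

rules = [
    # "Customers younger than 18 hold no life insurance": expected for ~99%.
    (lambda r: not (r["age"] < 18 and r["has_life_insurance"]), 0.99),
    # "Retirees are at least 60 years old": expected for ~95%.
    (lambda r: not (r["status"] == "retired" and r["age"] < 60), 0.95),
]

records = [
    {"age": 17, "has_life_insurance": False, "status": "student"},
    {"age": 45, "has_life_insurance": True, "status": "employed"},
    {"age": 58, "has_life_insurance": True, "status": "retired"},
]
print(consistency_pvalue(records, rules))
```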

    Leveraging predictive data analytics in business decisions

    Get PDF
    This thesis examines the use of data analytics, and predictive data analytics in particular, in support of companies' business decisions. The aim is to determine the current state of data use in Finnish retail and the challenges associated with it, and also to map whether data analytics is used in retailers' decision making. The theoretical framework addresses the varied terminology of data analytics and its sub-concepts; data analytics serves as an umbrella term for sub-concepts such as advanced data analytics. The framework also discusses the role of data analytics in business, covering different ways of using it and the related challenges. The study was conducted with qualitative methods through interviews, one in each of the target organizations, which represent the largest market players in Finnish retail. The study is a comparative case study that uses qualitative means to examine the field of data analytics in companies. The material was divided into themes in order to identify the key observations, and the findings were compared with the theoretical framework to determine how the observations in the target organizations corresponded to the theory. The key findings are an assessment of the current state of data analytics and the identification of its potential in the companies. With regard to decision making, data analytics is not used systematically, and its role in supporting business decisions was strongly weighted towards strategic decision making. The obstacles to better use of data analytics were largely related to the organizations' insufficient resources and unsystematic practices.