
    From Information to Knowledge: Business Intelligence Usage and Perspectives

    A lack of quality data is one of the problems of management. This does not mean that data do not exist; on the contrary, we are usually swamped with unnecessary information. The problem is how to extract the data essential for decision-making from this large amount of data. Data are part of an organization's assets and, together with capital and human resources, an important component of overall competitiveness. New technologies that support drawing correct and valid conclusions from enormous amounts of data are created every day. Business intelligence and knowledge management are indispensable elements of successful business systems and of public administration strategy. The concept of business intelligence, or business information management, is one of the modern systems offering comprehensive and efficient use of information. It also provides for the use of the remaining collected data, converting them into useful information and knowledge. Information technology developments in recent years make it possible to store large amounts of information at lower cost and let people share their knowledge and work jointly and interactively across large distances. Given Bosnia and Herzegovina's aspirations toward EU accession, the concept of business intelligence is all the more important, as national systems can be connected with systems at the European Union level and public administration interoperability in the European context can be achieved. The goal of this paper is to review the notion of business intelligence and to assess the level of business intelligence usage in public organizations in Bosnia and Herzegovina. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

    Pipelines for Procedural Information Extraction from Scientific Literature: Towards Recipes using Machine Learning and Data Science

    This paper describes a machine learning and data science pipeline for structured information extraction from documents, implemented as a suite of open-source tools and extensions to existing tools. It centers around a methodology for extracting procedural information in the form of recipes, stepwise procedures for creating an artifact (in this case synthesizing a nanomaterial), from published scientific literature. From our overall goal of producing recipes from free text, we derive the technical objectives of a system consisting of pipeline stages: document acquisition and filtering, payload extraction, recipe step extraction as a relationship extraction task, recipe assembly, and presentation through an information retrieval interface with question answering (QA) functionality. This system meets computational information and knowledge management (CIKM) requirements of metadata-driven payload extraction, named entity extraction, and relationship extraction from text. Functional contributions described in this paper include semi-supervised machine learning methods for the PDF filtering and payload extraction tasks, followed by structured extraction and data transformation tasks beginning with section extraction, continuing with recipe steps as information tuples, and ending with assembled recipes. Measurable objective criteria for extraction quality include precision and recall of recipe steps, ordering constraints, and QA accuracy, precision, and recall. Results, key novel contributions, and significant open problems derived from this work center around the attribution of these holistic quality measures to specific machine learning and inference stages of the pipeline, each with its own performance measures. The desired recipes contain identified preconditions, material inputs, and operations, and constitute the overall output generated by our CIKM system.
    Comment: 15th International Conference on Document Analysis and Recognition Workshops (ICDARW 2019).
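
    As a concrete illustration of the recipe representation described above, the following Python sketch models recipe steps as tuples of operation, materials, and preconditions, extracts them per sentence, and assembles them in document order. All names and the matching rule are hypothetical; the actual pipeline uses trained semi-supervised extractors, not a keyword list.

```python
# A minimal, hypothetical sketch of the recipe representation: steps as tuples
# of (operation, materials, preconditions), extracted per sentence and
# assembled in document order. The keyword matcher is a toy stand-in for the
# paper's trained extractors.
import re
from dataclasses import dataclass, field

@dataclass
class RecipeStep:
    operation: str                              # e.g. "heat", "add", "stir"
    materials: list[str] = field(default_factory=list)
    preconditions: list[str] = field(default_factory=list)

OPERATIONS = {"heat", "stir", "add", "mix", "cool", "wash"}  # illustrative only

def extract_step(sentence: str) -> RecipeStep | None:
    """Toy relationship extraction: find an operation verb and its objects."""
    tokens = re.findall(r"[A-Za-z0-9]+", sentence.lower())
    ops = [t for t in tokens if t in OPERATIONS]
    if not ops:
        return None                             # sentence carries no step
    idx = tokens.index(ops[0])
    # Everything after the operation verb is treated as a material mention.
    return RecipeStep(operation=ops[0], materials=tokens[idx + 1:])

def assemble_recipe(sentences: list[str]) -> list[RecipeStep]:
    """Document order stands in for the ordering constraints between steps."""
    return [s for s in (extract_step(x) for x in sentences) if s is not None]

sentences = ["Heat the solution to 80 C.", "Add 5 mL HAuCl4 dropwise."]
for step in assemble_recipe(sentences):
    print(step)
```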

    THE VALUE OF DEEP LEARNING FOR LANDSCAPE REPRESENTATION: COMPARISON BETWEEN SEGMENTATION IMAGES, MAPS AND GIS

    Abstract. Landscape refers to the qualities of a place: the result of a structural, territorial and environmental component, and of the attribution of meanings, which is certainly the fundamental issue of the interpretative process. The Italian verb "percepire" ("to perceive") etymologically derives from "per", meaning "by means of, through", and "capere", which translates as "to take", "to collect" (information, sensory data), "to learn". Since images are derived from the territory, it is of primary interest to compare representations derived from automated processes on photographs with the synthetic data interpreting the territory inherent in plans developed with GIS, in order to obtain a more precise perceptual analysis. The emergence of new tools for the processing and reproduction of data offers new opportunities for the knowledge and representation of the landscape in architectural and urban contexts, and the integrative support that these processes can bring to the representation of the qualities of a place has to be reinterpreted in a Spatial Information Dataset in order to make the information synthetic and intelligible. Identifying specific themes by questioning these data through criteria, and placing at the centre the capacity of the digital environment, in its mathematisation, to compare data and transform them into information in an automated process, is aimed at the exploitation of Big Data and the full replicability of the procedure. In this way, it is possible to analyse the quality of space, of that notion of landscape conceived as "that part of the territory perceived by the population that lives it".
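
    One way to make the proposed segmentation-versus-GIS comparison measurable is per-class agreement between two co-registered label rasters. The sketch below is an illustration under our own assumptions (the class labels, the toy arrays, and the choice of intersection-over-union are ours, not the paper's):

```python
# Hypothetical sketch: per-class intersection-over-union between an automated
# segmentation and a rasterized GIS land-use layer over the same extent.
# Labels and data are invented; the paper does not prescribe this metric.
import numpy as np

def per_class_iou(segmentation: np.ndarray, gis_raster: np.ndarray) -> dict[int, float]:
    """IoU per class between two co-registered label rasters of equal shape."""
    assert segmentation.shape == gis_raster.shape
    ious = {}
    for cls in np.union1d(segmentation, gis_raster):
        seg, gis = segmentation == cls, gis_raster == cls
        union = np.logical_or(seg, gis).sum()
        if union:                     # skip classes absent from both rasters
            ious[int(cls)] = float(np.logical_and(seg, gis).sum() / union)
    return ious

# Toy 2x3 rasters: 0 = built, 1 = vegetation, 2 = water.
seg = np.array([[0, 1, 1], [2, 2, 1]])
gis = np.array([[0, 1, 2], [2, 2, 1]])
print(per_class_iou(seg, gis))        # {0: 1.0, 1: 0.666..., 2: 0.666...}
```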

    Contributions of scale: What we stand to gain from Indigenous and local inclusion in climate-health monitoring and surveillance systems

    Understanding how climate change will affect global health is a defining challenge of this century. This is predicated, however, on our ability to combine climate and health data to investigate the ways in which variations in climate, weather, and health outcomes interact. There is growing evidence to support the value of place- and community-based monitoring and surveillance efforts, which can contribute to improving both the quality and equity of the data collection needed to investigate and understand the impacts of climate change on health. The inclusion of multiple and diverse knowledge systems in climate-health surveillance presents many benefits, as well as challenges. We conducted a systematic review, synthesis, and confidence assessment of the published literature on integrated monitoring and surveillance systems for climate change and public health. We examined the inclusion of diverse knowledge systems in the climate-health literature, focusing on: 1) the analytical framing of integrated monitoring and surveillance system processes; 2) key contributions of Indigenous knowledge and local knowledge systems to integrated monitoring and surveillance system processes; and 3) patterns of inclusion within these processes. In total, 24 studies met the inclusion criteria and were included for data extraction, appraisal, and analysis. Our findings indicate that the inclusion of diverse knowledge systems contributes to integrated climate-health monitoring and surveillance systems across the processes of detection, attribution, and action. These contributions include: the definition of meaningful problems; the collection of more responsive data; the reduction of selection and source biases; the processing and interpretation of more comprehensive datasets; the reduction of scale-dependent biases; the development of multi-scale policy; long-term future planning; immediate decision-making and prioritization of key issues; as well as the creation of effective knowledge-information-action pathways. The value of our findings, and of this review, is to demonstrate that neither scientific, Indigenous, nor local knowledge systems alone can contribute the breadth and depth of information necessary to detect, attribute, and inform action along these pathways of climate-health impact. Rather, it is the divergence or discordance between the methodologies and evidences of different knowledge systems that can contribute uniquely to this understanding. We critically discuss what we, mainly local communities and experts, stand to lose if these processes of inclusion are not equitable, and explore how to shift the existing patterns of inclusion into balance by ensuring the equity of contributions and the justice of inclusion in these integrated monitoring and surveillance system processes.

    Concurrent use of prescription drugs and herbal medicinal products in older adults: A systematic review

    This is an Open Access article distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The use of herbal medicinal products (HMPs) is common among older adults. However, little is known about concurrent use with prescription drugs or about the potential interactions associated with such combinations. Objective: To identify and evaluate the literature on concurrent prescription and HMP use among older adults and to assess prevalence, patterns, potential interactions, and factors associated with this use. Methods: Systematic searches in MEDLINE, PsycINFO, EMBASE, CINAHL, AMED, Web of Science and Cochrane from inception to May 2017 for studies reporting concurrent use of prescription medicines with HMPs in adults (≥65 years). Quality was assessed using the Joanna Briggs Institute checklists. The Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) three-stage approach to mixed-method research was used to synthesise data. Results: Twenty-two studies were included. A definition of HMPs, or of what was considered an HMP, was frequently missing. Prevalence of concurrent use by older adults varied widely, between 5.3% and 88.3%. The prescription medicines most often combined with HMPs were antihypertensive drugs, beta blockers, diuretics, antihyperlipidemic agents, anticoagulants, analgesics, antihistamines, antidiabetics, antidepressants and statins. The HMPs most frequently used were ginkgo, garlic, ginseng, St John's wort, Echinacea, saw palmetto, evening primrose oil and ginger. The potential risk of bleeding due to the use of ginkgo, garlic or ginseng with aspirin or warfarin was the most reported herb-drug interaction. Some data suggest that being female, a lower household income and less than a high school education were associated with concurrent use. Conclusion: The prevalence of concurrent prescription drug and HMP use among older adults is substantial, and potential interactions have been reported. Knowledge of the extent and manner in which older adults combine prescription drugs with HMPs will help healthcare professionals to appropriately identify and manage patients at risk.
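
    The review's most frequently reported interaction (bleeding risk when ginkgo, garlic or ginseng is combined with aspirin or warfarin) can be illustrated as a simple lookup, as in the hypothetical sketch below; this is not the review's methodology and not a clinical decision tool.

```python
# Illustrative lookup for the most reported interaction in the review:
# bleeding risk when ginkgo, garlic or ginseng is combined with aspirin or
# warfarin. Table and function are for illustration only, not clinical use.
BLEEDING_RISK_HERBS = {"ginkgo", "garlic", "ginseng"}
ANTICOAGULANTS = {"aspirin", "warfarin"}

def flag_interactions(prescriptions: set[str], herbals: set[str]) -> list[str]:
    """Return a warning for each risky prescription/HMP combination found."""
    return [
        f"Potential bleeding risk: {herb} + {drug}"
        for herb in sorted(herbals & BLEEDING_RISK_HERBS)
        for drug in sorted(prescriptions & ANTICOAGULANTS)
    ]

print(flag_interactions({"warfarin", "simvastatin"}, {"ginkgo", "ginger"}))
# ['Potential bleeding risk: ginkgo + warfarin']
```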

    ArrayWiki: an enabling technology for sharing public microarray data repositories and meta-analyses

    © 2008 Stokes et al.; licensee BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. DOI: 10.1186/1471-2105-9-S6-S18. Background: A survey of microarray databases reveals that most repository contents and data models are heterogeneous (i.e., data obtained from different chip manufacturers), and that the repositories provide only basic biological keywords linking to PubMed. As a result, it is difficult to find datasets using research context or analysis parameter information beyond a few keywords. For example, to reduce the "curse-of-dimension" problem in microarray analysis, the number of samples is often increased by merging array data from different datasets. Knowing chip data parameters such as pre-processing steps (e.g., normalization, artefact removal, etc.), and knowing about any previous biological validation of the dataset, is essential due to the heterogeneity of the data. However, most microarray repositories do not hold this meta-data in the first place, and do not have a mechanism to add or insert it. Thus, there is a critical need to create "intelligent" microarray repositories that (1) enable the update of meta-data along with the raw array data, and (2) provide standardized archiving protocols to minimize bias from the raw data sources. Results: To address these problems, we have developed a community-maintained system called ArrayWiki that unites disparate meta-data of microarray meta-experiments from multiple primary sources, with four key features. First, ArrayWiki provides a user-friendly knowledge management interface in addition to a programmable interface using standards developed by Wikipedia. Second, ArrayWiki includes automated quality control processes (caCORRECT) and novel visualization methods (BioPNG, Gel Plots), which provide extra information about data quality unavailable in other microarray repositories. Third, it provides a user-curation capability through the familiar Wiki interface. Fourth, ArrayWiki provides users with simple text-based searches across all experiment meta-data, and exposes data to search engine crawlers (Semantic Agents) such as Google to further enhance data discovery. Conclusions: Microarray data and meta-information in ArrayWiki are distributed and visualized using a novel and compact data storage format, BioPNG. They are also open to the research community for curation, modification, and contribution. By making a small investment of time to learn the syntax and structure common to all sites running MediaWiki software, domain scientists and practitioners can all contribute to making better use of microarray technologies in research and medical practice. ArrayWiki is available at http://www.bio-miblab.org/arraywiki
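
    The "simple text-based search across all experiment meta-data" could look like the following minimal sketch; the record fields and the search function are assumptions for illustration, not ArrayWiki's actual schema or API.

```python
# A minimal sketch of text-based search over experiment meta-data records.
# The fields and behaviour here are assumptions for illustration, not
# ArrayWiki's actual schema or API.
from dataclasses import dataclass

@dataclass
class ExperimentMeta:
    accession: str
    platform: str             # chip manufacturer / array type
    preprocessing: list[str]  # e.g. normalization, artefact removal steps
    description: str

def search(records: list[ExperimentMeta], query: str) -> list[ExperimentMeta]:
    """Case-insensitive substring match over all metadata fields."""
    q = query.lower()
    return [
        r for r in records
        if q in " ".join([r.accession, r.platform, *r.preprocessing, r.description]).lower()
    ]

repo = [ExperimentMeta("GSE0001", "Affymetrix U133", ["RMA normalization"], "lung tumor samples")]
print([r.accession for r in search(repo, "normalization")])  # ['GSE0001']
```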

    Biological & Chemical Oceanography Data Management Office : a domain-specific repository for oceanographic data from around the world [poster]

    Presented at AGU Ocean Sciences, 11-16 February 2018, Portland, OR. The Biological and Chemical Oceanography Data Management Office (BCO-DMO) is a domain-specific digital data repository that works with investigators funded under the National Science Foundation's Division of Ocean Sciences and Office of Polar Programs to manage their data free of charge. Data managers work closely with investigators to satisfy their data sharing requirements and to develop comprehensive Data Management Plans, as well as to ensure that their data will be well described with extensive metadata creation. Additionally, BCO-DMO offers tools to find and reuse these high-quality data and metadata packages, and services such as DOI generation for publication and attribution. These resources are free for all to discover, access, and utilize. As a repository embedded in our research community, BCO-DMO is well positioned to offer knowledge and expertise from both domain-trained data managers and the scientific community at large. BCO-DMO is currently home to more than 9000 datasets and 900 projects, all of which are or will be submitted for archive at the National Centers for Environmental Information (NCEI). Our data holdings continue to grow and encompass a wide range of oceanographic research areas, including biological, chemical, physical, and ecological. These data represent cruises and experiments from around the world, and are managed using community best practices, standards, and technologies to ensure accuracy and promote re-use. BCO-DMO is a repository and tool for investigators, offering both ocean science data and resources for data dissemination and publication. NSF #143557

    A stochastic model for CD4+ T cell proliferation and dissemination network in primary immune response

    The study of the initial phase of the adaptive immune response after first antigen encounter provides essential information on the magnitude and quality of the immune response. This phase is characterized by the proliferation and dissemination of T cells in the lymphoid organs. Modeling and identifying the key features of this phenomenon may provide a useful tool for the analysis and prediction of the effects of immunization. This knowledge can be effectively exploited in vaccinology, where it is of interest to evaluate and compare the responses to different vaccine formulations. The objective of this paper is to construct a stochastic model, based on branching process theory, for the dissemination network of antigen-specific CD4+ T cells. The devised model is validated on in vivo animal experimental data. The model has been applied to the vaccine immunization context using simple proliferation laws that take into account division, death and quiescence, but it can also be applied to any context where it is of interest to study the dynamic evolution of a population. Copyright: © 2015 Boianelli et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
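
    To give a flavour of the kind of proliferation law the abstract mentions, here is a minimal Galton-Watson-style simulation in which each cell divides, dies, or remains quiescent each generation. The probabilities are invented for illustration and are not the paper's fitted parameters.

```python
# A minimal Galton-Watson-style generation step with division, death and
# quiescence, in the spirit of the proliferation laws mentioned above. The
# probabilities are invented for illustration, not the paper's fitted values.
import random

def step(cells: int, p_div: float = 0.4, p_die: float = 0.2) -> int:
    """One generation: each cell independently divides, dies, or stays quiescent."""
    next_gen = 0
    for _ in range(cells):
        u = random.random()
        if u < p_div:
            next_gen += 2    # division: one cell becomes two daughters
        elif u < p_div + p_die:
            pass             # death: cell leaves the population
        else:
            next_gen += 1    # quiescence: cell persists unchanged
    return next_gen

random.seed(0)
population = [100]
for _ in range(10):
    population.append(step(population[-1]))
# Expected per-generation growth factor: 2*p_div + (1 - p_div - p_die) = 1.2.
print(population)
```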