    A Technique For Timeliness Measurement In Information Manufacturing System (IMS)

    Timeliness is one of the major dimensions of data quality: it determines whether data is fresh or obsolete. Timeliness is generally calculated from currency and volatility. Currency is computed from the age of the data, its delivery time and its input time, while volatility is the duration for which the data remains valid. In an IMS, currency and volatility depend on factors such as the refreshment period, the waiting time of data in the system, the expiry time of the data and the response time for query requests. The purpose of this paper is therefore to develop a technique for measuring the timeliness of data in an IMS.
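
    The currency/volatility formulation named in the abstract is usually given, following Ballou et al., as currency = age + (delivery time - input time) and timeliness = max(0, 1 - currency/volatility)^s. The short Python sketch below computes it under that assumption; the paper's own technique is not reproduced here, and the function names and figures are illustrative.

        # Standard currency/volatility timeliness measure (assumed form,
        # not taken verbatim from the paper). All durations in hours.
        def currency(age_h, delivery_time_h, input_time_h):
            # Age of the data at input, plus the time it spent in the
            # system before delivery.
            return age_h + (delivery_time_h - input_time_h)

        def timeliness(currency_h, volatility_h, s=1.0):
            # Maps freshness to [0, 1]: 1.0 is perfectly fresh, 0.0 means
            # the data has outlived its validity; s tunes the decay.
            return max(0.0, 1.0 - currency_h / volatility_h) ** s

        # Data 1 h old at input (t = 2 h), delivered at t = 5 h, valid 24 h:
        c = currency(age_h=1, delivery_time_h=5, input_time_h=2)  # 4 h
        print(timeliness(c, volatility_h=24))                     # ~0.83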

    Security Aspects in Web of Data Based on Trust Principles. A brief of Literature Review

    Within the scientific community there is a certain consensus in defining "Big Data" as a global set formed through a complex integration that embraces several dimensions: research data, Open Data, Linked Data, Social Network Data, and so on. These data are scattered across different sources, a mix that responds to diverse philosophies, a great diversity of structures, different denominations, and more. Managing them poses great technological and methodological challenges: the discovery and selection of data, its extraction and final processing, preservation, visualization, possibilities of access, and its greater or lesser degree of structure, among other aspects, which together reveal a huge domain of study at the level of analysis and implementation in different knowledge domains. However, given the availability of these data and their possible opening, what problems does opening the data face? This paper presents a literature review of these security aspects.

    Incorporating Domain-Specific Information Quality Constraints into Database Queries

    The range of information now available in queryable repositories opens up a host of possibilities for new and valuable forms of data analysis. Database query languages such as SQL and XQuery offer a concise and high-level means by which such analyses can be implemented, facilitating the extraction of relevant data subsets into either generic or bespoke data analysis environments. Unfortunately, the quality of data in these repositories is often highly variable. The data is still useful, but only if the consumer is aware of the data quality problems and can work around them. Standard query languages offer little support for this aspect of data management. In principle, however, it should be possible to embed constraints describing the consumer's data quality requirements into the query directly, so that the query evaluator can take over responsibility for enforcing them during query processing. Most previous attempts to incorporate information quality constraints into database queries have been based around a small number of highly generic quality measures, which are defined and computed by the information provider. This is a useful approach in some application areas but, in practice, quality criteria are more commonly determined by the user of the information, not by the provider. In this paper, we explore an approach to incorporating quality constraints into database queries.
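
    As a rough, hypothetical illustration of that idea (not the paper's actual mechanism or syntax), the Python sketch below attaches a consumer-supplied quality predicate to an ordinary SQL query so that enforcement happens during evaluation rather than in a separate post-processing step; run_query, the readings table and its error_margin column are all invented for the example.

        import sqlite3

        def run_query(conn, sql, quality_predicate):
            # Evaluate the query, keeping only rows that satisfy the
            # consumer's domain-specific quality requirement.
            cur = conn.execute(sql)
            cols = [d[0] for d in cur.description]
            return [row for row in (dict(zip(cols, r)) for r in cur)
                    if quality_predicate(row)]

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE readings (site TEXT, value REAL, error_margin REAL)")
        conn.executemany("INSERT INTO readings VALUES (?, ?, ?)",
                         [("A", 10.2, 0.1), ("B", 9.8, 2.5), ("C", 11.0, 0.3)])

        # The consumer, not the provider, decides what "good enough" means:
        good = run_query(conn, "SELECT * FROM readings",
                         lambda row: row["error_margin"] < 0.5)
        print(good)  # only the rows for sites A and C survive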

    Apoio à avaliação da qualidade de dados em eScience: uma abordagem baseada em proveniência (Support for data quality assessment in eScience: a provenance-based approach)

    Advisor: Claudia Maria Bauzer Medeiros. Doctoral thesis (Doutorado em Ciência da Computação), Universidade Estadual de Campinas, Instituto de Computação.
    Abstract: Data quality is a recurrent concern in all scientific domains. Experiments analyze and manipulate several kinds of datasets and generate data to be (re)used by other experiments. The basis for obtaining good scientific results is highly associated with the degree of quality of such datasets. However, the data involved in the experiments are manipulated by a wide range of users with distinct research interests, using their own vocabularies, work methodologies, models, and sampling needs. Given this scenario, a challenge in computer science is to come up with solutions that help scientists assess the quality of their data. Different efforts have been proposed addressing the estimation of quality. Some of these efforts point out that data provenance attributes can be used to evaluate quality. However, most of these initiatives address the evaluation of a single, specific quality attribute, frequently focusing on atomic data values, which reduces their applicability. There is thus a need for new solutions that scientists can adopt to assess how good their data are. In this PhD research, we present an approach to this problem based on the notion of data provenance. Unlike other similar approaches, our proposal combines quality attributes specified within a context by specialists with metadata describing the provenance of a dataset. The main contributions of this work are: (i) the specification of a framework that takes advantage of data provenance to derive quality information; (ii) a methodology associated with this framework that outlines the procedures to support quality assessment; (iii) the proposal of two different provenance models that capture provenance information for fixed and extensible scenarios; and (iv) the validation of items (i) through (iii), discussed via case studies in agriculture and biodiversity.
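
    As a toy sketch of the general idea rather than the thesis's actual framework or provenance models, the Python below scores a dataset from its provenance record using attribute functions and weights that, in the proposal, would be specified within a context by domain specialists; every name, weight and scoring rule here is invented.

        # Hypothetical provenance record for one dataset.
        provenance = {
            "source": "field_sensor",   # where the data originated
            "calibrated": True,         # instrument calibration recorded
            "processing_steps": 3,      # transformations applied so far
        }

        # Specialist-defined quality attributes: (name, weight, scorer),
        # each scorer mapping provenance metadata to a value in [0, 1].
        attributes = [
            ("trusted_source",   0.5, lambda p: 1.0 if p["source"] == "field_sensor" else 0.4),
            ("calibration",      0.3, lambda p: 1.0 if p["calibrated"] else 0.0),
            ("low_reprocessing", 0.2, lambda p: max(0.0, 1.0 - 0.1 * p["processing_steps"])),
        ]

        # Weighted combination of attribute scores for this dataset.
        score = sum(weight * scorer(provenance) for _, weight, scorer in attributes)
        print(f"quality score: {score:.2f}")  # 0.94 for this record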

    Improvement of data quality with timeliness in information manufacturing system (IMS)

    Nowadays, in the digital world, organizations and enterprises such as banks, hospitals, telecommunications companies and retail shops have an information manufacturing system (IMS) for storing the organization's data in digital format. Every day, a large quantity of data is manipulated (inserted, deleted and updated) in the information manufacturing systems of these enterprises and organizations. To be successful, the IMS must maintain the data and transform it into useful information for decision makers and users. Much of the value rests in the quality of the data, which may be divided into two classes: objective and time-related. Maintaining quality in both classes may require completeness, accuracy and consistency of the data as well as timeliness of the information generation. As a further complication, the objective data quality class may not be independent: it can depend on the timeliness of the time-related class. The main purpose of this research is the improvement of data quality with timeliness in an IMS. It starts by observing the reasons why objective data quality changes over time, using both theoretical and experimental data quality measurements. Novel approaches to ensuring the best possible information quality are developed and evaluated by observing how objective data quality changes with timeliness in a purpose-built IMS.
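
    To make that dependency concrete, the sketch below shows one assumed model (a simple linear decay, not the paper's measurements) in which an objective dimension such as accuracy, measured at load time, erodes as the data's timeliness drops.

        def timeliness(age_h, volatility_h):
            # Freshness falls linearly from 1.0 to 0.0 across the data's
            # validity window (same currency/volatility idea as above).
            return max(0.0, 1.0 - age_h / volatility_h)

        def effective_accuracy(measured_accuracy, age_h, volatility_h):
            # Assumed model: objective quality is discounted by staleness.
            return measured_accuracy * timeliness(age_h, volatility_h)

        for age_h in (0, 12, 24, 48):
            print(age_h, effective_accuracy(0.95, age_h, volatility_h=48))
        # Accuracy erodes from 0.95 to 0.0 as the data reaches its expiry.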