35 research outputs found

    Ronaldo dos Santos Mello

    Automatic extraction and representation of geographic entities in eGovernment

    In this paper we present a system that automatically extracts and geocodes named entities from unstructured, natural-language textual documents. The system uses the Geo-Net-PT ontology and Google Maps as auxiliary data sources. This type of system is particularly useful for automating the geocoding of existing information in e-government applications, which usually requires human intervention. In the paper we introduce the relevant human language technologies, describe the system that was developed, present and discuss the preliminary results, and draw conclusions and outline future work.
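
    By way of illustration, the following is a minimal sketch of the extract-then-geocode flow described in the abstract, assuming a toy in-memory gazetteer in place of Geo-Net-PT and a stubbed-out external geocoder in place of Google Maps; all names and data here are hypothetical and do not reflect the paper's actual implementation.

```python
# Illustrative sketch of the extract-then-geocode flow described in the abstract.
# The real system uses the Geo-Net-PT ontology and the Google Maps service;
# here a tiny in-memory gazetteer and a stubbed geocoder stand in for both.

from dataclasses import dataclass

# Placeholder gazetteer: name -> (latitude, longitude). Stands in for Geo-Net-PT.
GAZETTEER = {
    "Lisboa": (38.7223, -9.1393),
    "Porto": (41.1579, -8.6291),
}

@dataclass
class GeoEntity:
    name: str
    lat: float
    lon: float

def extract_place_names(text: str) -> list[str]:
    """Naive gazetteer lookup; a real system would first run an NER component."""
    return [name for name in GAZETTEER if name in text]

def geocode(name: str) -> tuple[float, float] | None:
    """Resolve a place name to coordinates; an external geocoding service
    would be queried as a fallback, but that call is omitted in this sketch."""
    return GAZETTEER.get(name)

def process_document(text: str) -> list[GeoEntity]:
    """Extract place names from a document and geocode each one."""
    entities = []
    for name in extract_place_names(text):
        coords = geocode(name)
        if coords is not None:
            entities.append(GeoEntity(name, *coords))
    return entities

if __name__ == "__main__":
    doc = "The applicant lives in Lisboa and works in Porto."
    print(process_document(doc))
```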

    Frank Siqueira

    DYNASTAT: A Methodology for Dynamic and Static Modeling of Multi-agent Systems

    Multi-agent systems are increasingly being used within various knowledge domains. The need to model multi-agent systems in a systematic and effective way is becoming more evident. In this chapter, we present the DYNASTAT methodology. This methodology involves a conceptual overview of multi-agent systems, a selection of specific agent characteristics to model, and a discussion of what has to be modeled for each of these characteristics. DYNASTAT is independent of any particular modeling language, but provides a framework that can be realized in a particular language and applied to real-world examples. UML 2.2 was chosen as the modeling language to implement the DYNASTAT methodology, and this is illustrated using examples from the medical domain. Several UML 2.2 diagrams were selected, including use case, composite structure, sequence, and activity diagrams, to model a multi-agent system able to assist both a medical researcher and a primary care physician. UML 2.2 provides a framework for effective modeling of agent-based systems in a standardized way, as this chapter endeavors to demonstrate.
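
    As a rough illustration of the kind of dynamic agent interaction that DYNASTAT's sequence and activity diagrams capture, the sketch below wires up two hypothetical agents exchanging messages; the agent names and messages are invented for this example and are not taken from the chapter's medical-domain model.

```python
# Illustrative sketch only: a tiny message-passing agent pair standing in for
# the kind of dynamic interaction DYNASTAT captures with UML 2.2 sequence and
# activity diagrams. Agent names and messages are hypothetical.

from collections import deque

class Agent:
    def __init__(self, name: str):
        self.name = name
        self.inbox: deque[tuple[str, str]] = deque()  # (sender, content)

    def send(self, other: "Agent", content: str) -> None:
        """Deliver a message into another agent's inbox."""
        other.inbox.append((self.name, content))

    def step(self) -> None:
        """Process all pending messages."""
        while self.inbox:
            sender, content = self.inbox.popleft()
            print(f"{self.name} received from {sender}: {content}")

if __name__ == "__main__":
    researcher = Agent("ResearchAssistantAgent")
    physician = Agent("PrimaryCareAgent")
    researcher.send(physician, "new cohort study results available")
    physician.send(researcher, "request matching patient statistics")
    physician.step()
    researcher.step()
```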

    Final report: PATTON Alliance gazetteer evaluation project.

    Technical Privacy Metrics: a Systematic Survey

    The goal of privacy metrics is to measure the degree of privacy enjoyed by users in a system and the amount of protection offered by privacy-enhancing technologies. In this way, privacy metrics contribute to improving user privacy in the digital world. The diversity and complexity of privacy metrics in the literature make an informed choice of metrics challenging. As a result, instead of using existing metrics, new metrics are proposed frequently, and privacy studies are often incomparable. In this survey we alleviate these problems by structuring the landscape of privacy metrics. To this end, we explain and discuss a selection of over eighty privacy metrics and introduce categorizations based on the aspect of privacy they measure, their required inputs, and the type of data that needs protection. In addition, we present a method for choosing privacy metrics based on nine questions that help identify the right metrics for a given scenario, and we highlight topics where additional work on privacy metrics is needed. Our survey spans multiple privacy domains and can be understood as a general framework for privacy measurement.
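
    As a concrete illustration of what a privacy metric computes, the sketch below implements one common uncertainty-based metric: the entropy of the adversary's probability distribution over candidate users, normalized to a degree of anonymity in [0, 1]. It is only one example from the broad space the survey categorizes, written here as an assumption-laden sketch rather than anything taken from the survey itself.

```python
# One common uncertainty-based privacy metric: the Shannon entropy of the
# adversary's belief distribution over candidate users, normalized by log2(N)
# to give a "degree of anonymity" in [0, 1]. Chosen for brevity; the survey
# categorizes over eighty such metrics.

import math

def entropy(probabilities: list[float]) -> float:
    """Shannon entropy (in bits) of the adversary's belief distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

def degree_of_anonymity(probabilities: list[float]) -> float:
    """Entropy normalized by the maximum achievable for this anonymity set."""
    n = len(probabilities)
    if n <= 1:
        return 0.0
    return entropy(probabilities) / math.log2(n)

if __name__ == "__main__":
    uniform = [0.25, 0.25, 0.25, 0.25]  # adversary learns nothing -> 1.0
    skewed = [0.7, 0.1, 0.1, 0.1]       # adversary strongly suspects one user -> < 1.0
    print(degree_of_anonymity(uniform))
    print(degree_of_anonymity(skewed))
```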

    Mapper: an efficient data transformation operator

    Doctoral thesis in Informatics (Informatics Engineering), presented to the Universidade de Lisboa through the Faculdade de Ciências, 2008. Data transformations are fundamental operations in legacy data migration, data integration, data cleaning, and data warehousing. These operations are often implemented as relational queries that aim at leveraging the optimization capabilities of most DBMSs. However, relational query languages like SQL are not expressive enough to specify one-to-many data transformations, an important class of data transformations that produce several output tuples for a single input tuple. These transformations are required for solving several types of data heterogeneities, such as those that occur when the source data represents aggregations of the target data. This thesis proposes a new relational operator, named data mapper, as an extension to the relational algebra to address one-to-many data transformations, and focuses on its optimization. It also provides algebraic rewriting rules and execution algorithms for logical and physical optimization, respectively. As a result, queries may be expressed as a combination of standard relational operators and mappers. The proposed optimizations have been experimentally validated and the key factors that influence the obtained performance gains have been identified. Keywords: Relational Algebra, Data Transformation, Data Integration, Data Cleaning, Data Warehousing
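
    A minimal sketch of the one-to-many behavior that motivates the data mapper operator is given below: each input tuple may yield several output tuples, which a single standard relational projection or selection cannot express. The schema and the expansion function are hypothetical examples, not taken from the thesis.

```python
# Illustrative sketch of a one-to-many "data mapper": each input tuple may
# yield several output tuples. The schema below (aggregated order lines
# expanded into unit-level records) is a hypothetical example.

from typing import Callable, Iterable, Iterator

Row = dict

def mapper(relation: Iterable[Row], f: Callable[[Row], Iterable[Row]]) -> Iterator[Row]:
    """Apply a one-to-many mapping function f to every tuple of the input relation."""
    for row in relation:
        yield from f(row)

def explode_order_line(row: Row) -> Iterator[Row]:
    """Turn an aggregated line (order, item, qty) into qty unit-level tuples."""
    for unit in range(row["qty"]):
        yield {"order": row["order"], "item": row["item"], "unit_no": unit + 1}

if __name__ == "__main__":
    source = [
        {"order": 1, "item": "A", "qty": 3},
        {"order": 2, "item": "B", "qty": 1},
    ]
    for out in mapper(source, explode_order_line):
        print(out)
```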