Holistic integration and automatic warehousing of Open Data
Statistical Open Data provide useful information to feed a decision-making system. Their integration and storage within such systems is achieved through ETL processes, which must be automated in order to make them accessible to non-experts. These processes must also cope with the lack of schemas and the structural and semantic heterogeneity that characterize Open Data. To address these issues, we propose a new graph-based ETL approach. For the extraction, we propose automatic activities performing detection and annotation based on a table model. For the transformation, we propose a linear program that performs the holistic integration of several graphs; this model yields an optimal and unique solution. For the loading, we propose a progressive process for the definition of the multidimensional schema and the augmentation of the integrated graph. Finally, we present a prototype and its experimental evaluation.
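The holistic-matching step described above can be pictured with a small sketch. The thesis solves the integration of several graphs with a linear program; the fragment below is only an illustrative stand-in that brute-forces the optimal one-to-one correspondence between column labels of two Open Data tables using a character-bigram similarity (all labels and the scoring function are made-up examples, not the thesis's actual model).

```python
# Illustrative sketch, NOT the thesis's linear program: find the
# one-to-one label assignment between two tables that maximises total
# bigram similarity, by exhaustive search over permutations.
from itertools import permutations

def bigrams(s):
    s = s.lower()
    return {s[i:i + 2] for i in range(len(s) - 1)}

def similarity(a, b):
    """Dice coefficient over character bigrams."""
    ba, bb = bigrams(a), bigrams(b)
    if not ba or not bb:
        return 0.0
    return 2 * len(ba & bb) / (len(ba) + len(bb))

def best_matching(labels_a, labels_b):
    """Exhaustively search for the assignment with the highest total score."""
    best, best_score = None, -1.0
    for perm in permutations(labels_b):
        score = sum(similarity(a, b) for a, b in zip(labels_a, perm))
        if score > best_score:
            best, best_score = list(zip(labels_a, perm)), score
    return best

# Hypothetical labels from an English and a French statistical table.
pairs = best_matching(["year", "population", "region"],
                      ["région", "pop_total", "yr"])
```

A real linear-programming formulation scales far beyond this factorial search, which is the point of the optimal and unique solution claimed above.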
Business Intelligence on Non-Conventional Data
The revolution in digital communications witnessed over the last decade had a significant impact on the world of Business Intelligence (BI). In the big data era, the amount and diversity of data that can be collected and analyzed for the decision-making process transcends the restricted and structured set of internal data that BI systems are conventionally limited to. This thesis investigates the unique challenges imposed by three specific categories of non-conventional data: social data, linked data and schemaless data. Social data comprises the user-generated contents published through websites and social media, which can provide a fresh and timely perception about people’s tastes and opinions. In Social BI (SBI), the analysis focuses on topics, meant as specific concepts of interest within the subject area. In this context, this thesis proposes meta-star, an alternative strategy to the traditional star-schema for modeling hierarchies of topics to enable OLAP analyses. The thesis also presents an architectural framework of a real SBI project and a cross-disciplinary benchmark for SBI. Linked data employ the Resource Description Framework (RDF) to provide a public network of interlinked, structured, cross-domain knowledge. In this context, this thesis proposes an interactive and collaborative approach to build aggregation hierarchies from linked data. Schemaless data refers to the storage of data in NoSQL databases that do not force a predefined schema, but let database instances embed their own local schemata. In this context, this thesis proposes an approach to determine the schema profile of a document-based database; the goal is to facilitate users in a schema-on-read analysis process by understanding the rules that drove the usage of the different schemata. 
A final and complementary contribution of this thesis is an innovative technique in the field of recommendation systems to overcome user disorientation in the analysis of a large and heterogeneous wealth of data.
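The schema-profiling idea for schemaless data described above can be sketched in a few lines. This is not the thesis's algorithm, only a minimal illustration of the goal: grouping the documents of a collection by the local schema each one embeds, so an analyst doing schema-on-read can see which schemata coexist and how often each is used.

```python
# Minimal sketch (not the thesis's method): profile a document collection
# by the set of (field, type) pairs each document uses.
from collections import Counter

def schema_of(doc):
    """A document's local schema: its sorted (field, type-name) pairs."""
    return tuple(sorted((k, type(v).__name__) for k, v in doc.items()))

def schema_profile(docs):
    """Count how many documents use each distinct schema."""
    return Counter(schema_of(d) for d in docs)

# Hypothetical documents from a document-based (NoSQL) collection.
docs = [
    {"name": "Ada", "age": 36},
    {"name": "Alan", "age": 41},
    {"name": "Grace", "age": 85, "rank": "admiral"},
]
profile = schema_profile(docs)
```

Understanding *why* the third document carries an extra field is the kind of rule the thesis aims to surface for the analyst.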
Design of a Historical Data Warehouse in the context of model-driven software development
A Decision Support System (DSS) assists users in the process of analyzing an organization's data in order to produce information that allows them to make better decisions. Analysts who use a DSS are more interested in identifying trends than in looking up any individual record in isolation [HRU96]. To this end, the data from the different transactions are stored and consolidated in a central database called a Data Warehouse (DW); analysts use these data structures to extract business information that lets them make better decisions [GHRU97]. Based on the source data schema and on the organization's information requirements, the goal of the DSS designer is to synthesize these data and reduce them to a format that allows the application user to analyze the behavior of the enterprise. Two different (but related) kinds of activities are involved: the design of the storage structures and the creation of queries over those structures. The first task falls to the designers of software applications; the second, to the end users. Both activities are normally carried out with little assistance from automated tools. We therefore identify three problems to solve: a) the creation of storage structures that are efficient for decision making, b) the simplification of how the end user obtains information from those structures, and c) the automation both of the design of the storage structures and of the iterative construction of queries by the application user.
The proposed solution is the design of a new storage structure that we call a Historical Data Warehouse (HDW), which combines a Historical Data Base (HDB) and a DW in an integrated model; the design of a graphical interface, derived from the HDW, that allows queries to be built automatically; and, finally, the development of a design method that encompasses both proposals within the framework of Model Driven Software Development (MDD).
An evaluation of the challenges of Multilingualism in Data Warehouse development
In this paper we discuss Business Intelligence and define what is meant by support for multilingualism in a Business Intelligence reporting context. We identify support for multilingualism as a challenging issue with implications for data warehouse design and reporting performance. Data warehouses are a core component of most Business Intelligence systems, and the star schema is the approach most widely used to develop data warehouses and dimensional data marts. We discuss the ways in which multilingualism can be supported in the star schema and show that current approaches have serious limitations, including data redundancy and data manipulation, performance, and maintenance issues. We propose a new approach to enable the optimal application of multilingualism in Business Intelligence. The proposed approach was found to produce satisfactory results when used in a proof-of-concept environment. Future work will include testing the approach in an enterprise environment.
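The redundancy problem mentioned above arises when every dimension row is duplicated once per supported language. One common way around it, shown here purely as a hypothetical illustration (this is not necessarily the paper's proposed design), is to keep the dimension language-neutral and move display labels into a separate translation table keyed by member and language.

```python
# Hypothetical illustration: a language-neutral dimension plus a separate
# translation table, instead of duplicating dimension rows per language.
product_dim = {          # dimension rows hold only language-neutral keys
    1: {"sku": "P-100"},
    2: {"sku": "P-200"},
}

translations = {         # display labels keyed by (member_id, language)
    (1, "en"): "Bicycle",
    (1, "fr"): "Vélo",
    (2, "en"): "Helmet",
    (2, "fr"): "Casque",
}

def label(member_id, lang, fallback="en"):
    """Resolve a member's display label, falling back to a default language."""
    return translations.get((member_id, lang),
                            translations.get((member_id, fallback)))
```

In a relational star schema the same idea is a narrow lookup table joined at report time, which trades one extra join for the elimination of per-language row duplication.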
A graph-based framework for data retrieved from criminal-related documents
The digitalization of company processes has increased the treatment and analysis of a growing volume of data from heterogeneous sources, with emerging challenges, namely those related to knowledge representation. Criminal police forces face similar challenges, given the amount of unstructured data from police reports that is analyzed manually by criminal investigators, at a corresponding cost in time and resources.
There is thus a need to automatically extract and represent the unstructured data found in criminal-related documents and to reduce the manual analysis performed by criminal investigators. This is a challenge for computer science: to propose a computational alternative that can extract and represent these data, adapting existing methods or proposing new ones.
A broad set of computational methods has been applied to the criminal domain, such as the identification and classification of named entities (NEs), for example narcotics, or the extraction of relations between entities that are relevant for a criminal investigation. However, these methods have mainly been applied to the English language; in Portugal, research in this domain using computational methods lacks related work, making its application to criminal investigation unfeasible.
This thesis proposes an integrated solution for representing unstructured data retrieved from documents, using a set of computational methods: a Preprocessing of Criminal-Related Documents module, supported by Extraction, Transformation, and Loading tasks and followed by a Natural Language Processing pipeline for the Portuguese language that performs syntactic and semantic analysis of the textual data; a 5W1H Information Extraction Method that combines Named-Entity Recognition, Semantic Role Labelling, and Criminal Terms Extraction tasks; and, finally, a Graph Database Population and Enrichment step that represents the retrieved data in a Neo4j graph database.
Globally, the framework presents promising results, which were validated using prototypes developed for this purpose. The feasibility of extracting the unstructured data, interpreting it syntactically and semantically, and representing it in the graph database has also been demonstrated.
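The final population step described above can be sketched as follows. The node labels, property names, and example tuple below are hypothetical, not the thesis's actual schema; the sketch only shows the general shape of turning an extracted 5W1H tuple into Cypher MERGE statements, which a real system would execute through the official Neo4j driver.

```python
# Hypothetical sketch of graph-database population: convert an extracted
# 5W1H event tuple into Cypher MERGE statements for Neo4j. Labels and
# properties are illustrative only.
def to_cypher(event):
    """Build Cypher statements linking an event to its who and where."""
    return [
        f"MERGE (e:Event {{id: '{event['id']}', what: '{event['what']}'}})",
        f"MERGE (p:Person {{name: '{event['who']}'}})",
        f"MERGE (l:Location {{name: '{event['where']}'}})",
        "MERGE (p)-[:INVOLVED_IN]->(e)",
        "MERGE (e)-[:OCCURRED_AT]->(l)",
    ]

# A made-up extracted tuple, standing in for the 5W1H method's output.
stmts = to_cypher({"id": "ev1", "what": "theft",
                   "who": "J. Silva", "where": "Lisboa"})
```

MERGE (rather than CREATE) keeps the population step idempotent, so re-processing the same report does not duplicate nodes, which matters when documents are enriched incrementally.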
Workshop, Long and Short Paper, and Poster Proceedings from the Fourth Immersive Learning Research Network Conference (iLRN 2018 Montana), 2018.
iLRN 2018 - international conference held in Montana, June 24-29, 2018. Workshop, short paper, and long paper proceedings.
Anales del XIII Congreso Argentino de Ciencias de la Computación (CACIC)
Contents:
Computer architectures
Embedded systems
Service-oriented architectures (SOA)
Communication networks
Heterogeneous networks
Advanced networks
Wireless networks
Mobile networks
Active networks
Administration and monitoring of networks and services
Quality of Service (QoS, SLAs)
Computer security, authentication and privacy
Infrastructure for digital signatures and digital certificates
Vulnerability analysis and detection
Operating systems
P2P systems
Middleware
Grid infrastructure
Integration services (Web Services or .NET)
Red de Universidades con Carreras en Informática (RedUNCI)