1,208 research outputs found
A unified view of data-intensive flows in business intelligence systems: a survey
Data-intensive flows are central processes in today’s business intelligence (BI) systems, deploying different technologies to deliver data, from a multitude of data sources, in user-preferred and analysis-ready formats. To meet the complex requirements of next-generation BI systems, we often need an effective combination of the traditionally batched extract-transform-load (ETL) processes that populate a data warehouse (DW) from integrated data sources, and more real-time and operational data flows that integrate source data at runtime. Both academia and industry thus need a clear understanding of the foundations of data-intensive flows and of the challenges of moving towards next-generation BI environments. In this paper we present a survey of today’s research on data-intensive flows and the related fundamental fields of database theory. The study is based on a proposed set of dimensions describing the important challenges of data-intensive flows in the next-generation BI setting. As a result of this survey, we envision an architecture of a system for managing the lifecycle of data-intensive flows. The results further provide a comprehensive understanding of data-intensive flows, recognizing the challenges that are still to be addressed, and how current solutions can be applied to address them.
Aspects of semantic ETL
Thesis under joint supervision (cotutelle): Universitat Politècnica de Catalunya and Aalborg Universitet.

Business Intelligence (BI) tools support making better business decisions by analyzing available organizational data. Data Warehouses (DWs), typically structured with the Multidimensional (MD) model, are used to store data from different internal and external sources, processed using Extract-Transformation-Load (ETL) processes. On-Line Analytical Processing (OLAP) queries are applied to DWs to derive important business-critical knowledge. DW and OLAP technologies perform efficiently when they are applied to data that are static in nature and well organized in structure. Nowadays, Semantic Web (SW) technologies and the Linked Data (LD) principles inspire organizations to publish their semantic data, which allow machines to understand the meaning of data, using the Resource Description Framework (RDF) model. One of the reasons semantic data have been so successful is that they can be managed and made available to third parties with little effort and do not depend on sophisticated schemas. In addition to traditional (non-semantic) data sources, the incorporation of semantic data sources into a DW raises additional challenges of schema derivation, semantic heterogeneity, and schema and data management compared with traditional ETL tools. Furthermore, most SW data provided by business, academic, and governmental organizations include facts and figures, which raise new requirements for BI tools to enable OLAP-like analyses over those semantic (RDF) data. In this thesis, we 1) propose a layer-based ETL framework for handling diverse semantic and non-semantic data sources while addressing the challenges mentioned above, 2) propose a set of high-level ETL constructs for processing semantic data, and 3) implement appropriate environments (both programmable and GUI-based) to facilitate ETL processes and evaluate the proposed solutions. Our ETL framework is a semantic ETL framework because it integrates data semantically. The following paragraphs elaborate on these contributions.

We propose SETL, a unified framework for semantic ETL. The framework is divided into three layers: the Definition Layer, the ETL Layer, and the Data Warehouse Layer. In the Definition Layer, the semantic DW (SDW) schema, the sources, and the mappings among the sources and the target are defined. In the ETL Layer, the ETL processes that populate the SDW from the sources are designed. The Data Warehouse Layer manages the storage of the transformed semantic data. The framework supports the inclusion of semantic (RDF) data in DWs in addition to relational data. It allows users to define an ontology of a DW and annotate it with MD constructs (such as dimensions, cubes, and levels) using the Data Cube for OLAP (QB4OLAP) vocabulary. It supports traditional transformation operations and provides a method to generate semantic data from the source data according to the semantics encoded in the ontology. It also provides a method to connect internal SDW data with external knowledge bases, thereby creating a knowledge base, composed of an ontology and its instances, where the data are semantically connected with other external and internal data. To perform these tasks, we develop a high-level, Python-based programmable framework. A comprehensive experimental evaluation comparing SETL with a solution built using traditional tools (which require much more hand-coding), using the Danish Agricultural dataset as a use case, shows that SETL provides better performance, programmer productivity, and knowledge base quality. The comparison between SETL and Pentaho Data Integration (PDI) shows that SETL is 13.5% faster than PDI. Besides being faster, SETL treats semantic data as first-class citizens, whereas PDI contains no operators specific to semantic data.

On top of SETL, we propose SETLCONSTRUCT, where we define a set of high-level ETL tasks/operations to process semantic data sources. We divide the integration process into two layers: the Definition Layer and the Execution Layer. The Definition Layer includes two tasks that allow DW designers to define target (SDW) schemas and the mappings between (intermediate) sources and the (intermediate) target. To create mappings among the source and target constructs, we provide a mapping vocabulary called Source-To-Target Mapping (S2TMAP). Different from other ETL tools, we propose a new paradigm: we characterize the ETL flow transformations at the Definition Layer instead of independently within each ETL operation (in the Execution Layer). This way, the designer has an overall view of the process, which generates metadata (the mapping file) that the ETL operators read and with which they parametrize themselves automatically. In the Execution Layer, we propose a set of high-level ETL operations to process semantic data sources. In addition to cleansing, joining, and data-type-based transformations of semantic data, we propose operations to generate multidimensional semantics at the data level and operations to update the SDW to reflect changes in the sources. Moreover, we extend SETLCONSTRUCT to enable automatic generation of ETL execution flows (we call it SETLAUTO). Finally, we provide an extensive evaluation comparing the productivity, development time, and performance of SETLCONSTRUCT and SETLAUTO with the previous framework SETL. The evaluation shows that SETLCONSTRUCT improves considerably over SETL in terms of productivity, development time, and performance: 1) SETLCONSTRUCT uses 92% fewer typed characters (NOTC) than SETL, and SETLAUTO further reduces the number of used concepts (NOUC) by another 25%; 2) using SETLCONSTRUCT, the development time is almost cut in half compared to SETL, and is cut by another 27% using SETLAUTO; 3) SETLCONSTRUCT is scalable and has performance similar to SETL.

Finally, we develop a GUI-based semantic BI system, SETLBI, to define, process, integrate, and query semantic and non-semantic data. In addition to the Definition Layer and the ETL Layer, SETLBI has an OLAP Layer, which provides an interactive interface to enable self-service OLAP analysis over the semantic DW. Each layer is composed of a set of operations/tasks, and we use an ontology to formalize the intra- and inter-layer connections of the components of each layer. The ETL Layer extends the Execution Layer of SETLCONSTRUCT by adding operations to process non-semantic data sources. We demonstrate the final system using the Bangladesh population census 2011 dataset.

The final outcome of this thesis is the tool SETLBI. SETLBI enables (1) DW designers with little or no SW knowledge to semantically integrate semantic and non-semantic data and analyze them using OLAP, and (2) SW users with a basic MD background to define MD views over semantic data, integrate them with non-semantic sources, and enable OLAP-like analysis. In addition, SW users can enrich the generated SDW schema with RDFS/OWL constructs. Taking this framework as a starting point, researchers can use it to develop further interactive and automatic integration frameworks for SDWs. This project bridges BI and SW technologies, which in turn opens the door to further research opportunities, such as developing machine-understandable ETL and warehousing techniques.
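As a rough illustration of the MD annotation step described above, the following Python sketch (using rdflib) marks up a small target ontology with QB4OLAP constructs: a cube structure, a level, and a measure with an aggregate function. It is not code from SETL; the ex: names are hypothetical, and the exact QB4OLAP terms a given framework uses may differ.

    # Illustrative sketch only (not SETL's actual API): annotating a target DW
    # ontology with multidimensional constructs via the QB4OLAP vocabulary.
    from rdflib import Graph, Namespace, RDF

    QB   = Namespace("http://purl.org/linked-data/cube#")
    QB4O = Namespace("http://purl.org/qb4olap/cubes#")
    EX   = Namespace("http://example.org/sdw#")        # hypothetical SDW namespace

    g = Graph()
    g.bind("qb", QB); g.bind("qb4o", QB4O); g.bind("ex", EX)

    # Cube (data structure definition) with one level component and one measure.
    g.add((EX.SalesCubeStructure, RDF.type, QB.DataStructureDefinition))
    g.add((EX.yearLevel, RDF.type, QB4O.LevelProperty))
    g.add((EX.salesAmount, RDF.type, QB.MeasureProperty))

    # Component specifications: attach the level and the measure to the cube.
    g.add((EX.SalesCubeStructure, QB.component, EX.c1))
    g.add((EX.c1, QB4O.level, EX.yearLevel))
    g.add((EX.SalesCubeStructure, QB.component, EX.c2))
    g.add((EX.c2, QB.measure, EX.salesAmount))
    # Aggregate function for the measure (term casing differs across QB4OLAP versions).
    g.add((EX.c2, QB4O.aggregateFunction, QB4O.sum))

    print(g.serialize(format="turtle"))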
A hyperconnected manufacturing collaboration system using the semantic web and Hadoop ecosystem system
With the explosive growth of digital data communications in synergistic operating networks and cloud computing services, hyperconnected manufacturing collaboration systems face the challenges of extracting, processing, and analyzing data from multiple distributed web sources. Although semantic web technologies provide a solution to web data interoperability by storing semantic web standards in relational databases for the processing and analysis of web-accessible heterogeneous digital data, web data storage and retrieval via the predefined schemas of relational/SQL databases has become increasingly inefficient with the advent of big data. In response to this problem, the Hadoop ecosystem is being adopted to reduce the complexity of moving data to and from the big data cloud platform. This paper proposes a novel approach using a set of Hadoop tools for information integration and interoperability across hyperconnected manufacturing collaboration systems. In the Hadoop approach, data is “Extracted” from the web sources, “Loaded” into a set of NoSQL Hadoop Database (HBase) tables, and then “Transformed” and integrated into the desired format model with Hive’s schema-on-read. A case study was conducted to illustrate that the Hadoop Extract-Load-Transform (ELT) approach for syntactic and semantic web data integration could be adopted across the global smartphone value chain.
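As a rough sketch of the Extract-Load-Transform pattern described above, the following Python fragment extracts a raw JSON record from a web source, loads it untouched into an HBase table (via happybase), and defers transformation to a Hive query evaluated at read time. The host, table, column-family and view names are hypothetical and the paper's actual pipeline is not reproduced here; a running HBase Thrift server and HiveServer2 are assumed.

    # Illustrative ELT sketch (hypothetical names, not taken from the paper).
    import json
    import requests
    import happybase
    from pyhive import hive

    # Extract: pull raw JSON from a web source.
    record = requests.get("https://example.org/api/orders/1").json()

    # Load: store the raw record as-is in an HBase table (no upfront schema).
    hbase = happybase.Connection("localhost")
    table = hbase.table("raw_orders")
    table.put(b"order-1", {b"raw:json": json.dumps(record).encode("utf-8")})

    # Transform (at read time): a Hive query over an external table mapped onto
    # the HBase rows applies the desired schema only when the data is read.
    cursor = hive.connect(host="localhost", port=10000).cursor()
    cursor.execute("SELECT order_id, total FROM raw_orders_view WHERE total > 100")
    for row in cursor.fetchall():
        print(row)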
SETL: A programmable semantic extract-transform-load framework for semantic data warehouses
In order to make better decisions for business analytics, organizations increasingly use external structured, semi-structured, and unstructured data in addition to the (mostly structured) internal data. Current Extract-Transform-Load (ETL) tools are not suitable for this “open world scenario” because they do not consider semantic issues in the integration processing. Current ETL tools neither support processing semantic data nor create a semantic Data Warehouse (DW), a repository of semantically integrated data. This paper describes our programmable Semantic ETL (SETL) framework. SETL builds on Semantic Web (SW) standards and tools and supports developers by offering a number of powerful modules, classes, and methods for (dimensional and semantic) DW constructs and tasks. It thus supports semantic data sources in addition to traditional data sources, semantic integration, and creating or publishing a semantic (multidimensional) DW in terms of a knowledge base. A comprehensive experimental evaluation comparing SETL to a solution made with traditional tools (requiring much more hand-coding) on a concrete use case shows that SETL provides better programmer productivity, knowledge base quality, and performance.
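To make the semantic integration idea concrete, the sketch below shows one generic way to generate RDF triples from a tabular source according to a target ontology, using Python with sqlite3 and rdflib. The ontology terms and the mapping are hypothetical illustrations, not SETL's actual modules or methods.

    # Generic sketch of the core step: turning rows from a (non-semantic) source
    # into RDF triples typed against a target ontology. Names are hypothetical.
    import sqlite3
    from rdflib import Graph, Namespace, Literal, RDF, XSD

    ONT = Namespace("http://example.org/ontology#")   # hypothetical target ontology
    RES = Namespace("http://example.org/resource/")   # hypothetical instance namespace

    src = sqlite3.connect(":memory:")
    src.execute("CREATE TABLE farm (id INTEGER, name TEXT, hectares REAL)")
    src.execute("INSERT INTO farm VALUES (1, 'North Farm', 42.5)")

    g = Graph()
    for fid, name, hectares in src.execute("SELECT id, name, hectares FROM farm"):
        subject = RES[f"farm/{fid}"]
        g.add((subject, RDF.type, ONT.Farm))
        g.add((subject, ONT.name, Literal(name)))
        g.add((subject, ONT.areaInHectares, Literal(hectares, datatype=XSD.decimal)))

    print(g.serialize(format="turtle"))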
SOA enabled ELTA: approach in designing business intelligence solutions in Era of Big Data
The current work presents a new approach for designing business intelligence solutions. In the era of Big Data, established and robust analytical concepts and utilities need to adapt to changed market circumstances. The main focus of this work is to accelerate the process of building a “data-centric” Business Intelligence (BI) solution while preparing BI solutions for Big Data utilization. This research addresses the following goals: reducing the time spent during a business intelligence solution’s design phase; achieving flexibility of the BI solution when new data sources are added; and preparing the BI solution for utilizing Big Data concepts. This research proposes an extension of the existing Extract, Load and Transform (ELT) approach to a new Extract, Load, Transform and Analyze (ELTA) approach supported by the service-orientation concept. Additionally, the proposed model incorporates the Service-Oriented Architecture concept as a mediator for the transformation phase. On one side, such incorporation brings flexibility to the BI solution; on the other side, it reduces the complexity of the whole system by moving some responsibilities to external authorities.
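A minimal sketch of the ELTA idea with a service-oriented transformation step is given below, assuming a hypothetical external transformation service; none of the service names or payload fields come from the paper. The point it illustrates is that the transform logic lives behind a service boundary, so it can be replaced without touching the rest of the BI pipeline.

    # Sketch of an SOA-mediated transform in an ELTA pipeline (hypothetical service).
    import requests

    # Extract + Load: raw rows already landed in the BI solution, untransformed.
    raw_rows = [
        {"customer": "ACME", "amount": "1,299.50", "currency": "EUR"},
        {"customer": "Globex", "amount": "980.00", "currency": "USD"},
    ]

    # Transform: delegate to an external transformation service (the mediator).
    # Swapping this URL for another service changes the transformation logic
    # without changing the pipeline itself.
    response = requests.post("https://example.org/services/normalize-sales",
                             json={"rows": raw_rows}, timeout=30)
    response.raise_for_status()
    transformed = response.json()["rows"]   # assumed to carry a normalized amount_eur

    # Analyze: the added "A" phase of ELTA, here just a trivial aggregate.
    total = sum(row["amount_eur"] for row in transformed)
    print(f"Total sales (EUR): {total}")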
Big Data Mining and Semantic Technologies: Challenges and Opportunities
Big data, a term coined due to the explosion in the quantity and diversity of high-frequency digital data with the potential for valuable insights, has drawn great attention in research and development. Converting big data to actionable insights requires a deep understanding of big data, its characteristics, challenges, and current technological trends. The rise of big data is changing existing data storage, management, processing, and analytical mechanisms and leads to new architectures/ecosystems to handle big data applications. This paper covers the findings of our research study on big data characteristics, the various types of analysis associated with it, and the basic big data types. First, we present big data from a data mining and analysis perspective and discuss the challenges; next, we present the results of our study on the meaningful use of big data in the context of semantic technologies. Moreover, we discuss various case studies related to social media analysis and recent development trends to identify potential research directions for big data with semantic technologies.
DOI: 10.17762/ijritcc2321-8169.150711
Dublin Smart City Data Integration, Analysis and Visualisation
Data is an important resource for any organisation for understanding its in-depth workings and identifying unseen trends within the data. When this data is efficiently processed and analysed, it helps the authorities take appropriate decisions based on the derived insights and knowledge; through these decisions, service quality can be improved and the customer experience enhanced. Massive growth in data generation has been observed over the past two decades, and a significant part of this data is generated by dumb and smart sensors. If this raw data is processed in an efficient manner, it can raise quality levels in areas such as data mining, data analytics, business intelligence and data visualisation.
Loss of information during design & construction for highways asset management: A GeoBIM perspective
Modern cities will have a catalytic role in regulating global economic growth and development, highlighting their role as centers of economic activity. With urbanisation being a consequence of that, the built environment is pressured to withstand the rapid increase in demand for buildings as well as for safe, resilient and sustainable transportation infrastructure. Transportation infrastructure has a unique characteristic: it is interconnected, and thus it is essential for stakeholders to be able to capture, analyse and visualise these interlinked relationships efficiently and effectively. This requirement is addressed by an Asset Information Management System (AIMS), which enables the capture of such information from the early stages of a transport infrastructure construction project. Building Information Modelling (BIM) and Geographic Information Science/Systems (GIS) are two domains which facilitate the authoring, management and exchange of asset information by providing the location underpinning, both in the short term and through the very long lifespan of the infrastructure. These systems are not interoperable by nature, and extensive Extract/Transform/Load procedures are required when developing an integrated location-based Asset Management system, with consequent loss of information. The purpose of this paper is to provide insight into the information lifecycle during Design and Construction on a highways project, focusing on identifying the stages in which loss of information can impact decision-making during operational Asset Management: (i) 3D Model to IFC, (ii) IFC to AIM and (iii) IFC to 3D GIS for AIM. The discussion highlights the significance of custom property sets and classification systems in bridging the different data structures, as well as the power of 3D in visualising Asset Information, with future work focusing on the potential of early BIM-GIS integration for operational AM.
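As a small illustration of the first hand-off discussed in the paper (reading asset information out of an IFC model before it is loaded into an AIM or GIS), the sketch below uses ifcopenshell to dump element identifiers and property-set names to CSV. The file name and the choice of exported fields are hypothetical; this is not the workflow evaluated in the paper.

    # Minimal extract step of an IFC-to-AIM/GIS hand-off (hypothetical inputs).
    import csv
    import ifcopenshell
    import ifcopenshell.util.element as element_util

    model = ifcopenshell.open("highway_section.ifc")   # hypothetical model file

    with open("assets.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["global_id", "ifc_class", "name", "property_sets"])
        for entity in model.by_type("IfcElement"):
            psets = element_util.get_psets(entity)      # custom psets are visible here
            writer.writerow([entity.GlobalId, entity.is_a(), entity.Name,
                             "; ".join(psets.keys())])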
- …