
    Semantic data mining and linked data for a recommender system in the AEC industry

    Even though it can provide design teams with valuable performance insights and enhance decision-making, monitored building data is rarely reused in an effective feedback loop from operation to design. Data mining allows users to obtain such insights from the large datasets generated throughout the building life cycle. Furthermore, semantic web technologies make it possible to formally represent the built environment and retrieve knowledge in response to domain-specific requirements. Both approaches have independently established themselves as powerful aids in decision-making. Combining them can enrich data mining processes with domain knowledge and facilitate knowledge discovery, representation and reuse. In this article, we review the available data mining techniques and investigate to what extent they can be fused with semantic web technologies to provide recommendations to the end user in performance-oriented design. We demonstrate an initial implementation of a linked data-based system for the generation of recommendations.
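    The feedback loop described in this abstract can be illustrated with a minimal sketch: monitored building data is held as RDF-style triples, and a simple mining rule turns a performance pattern into a design recommendation. All names (ex:Zone1, bot:Space, the comfort threshold) are invented for illustration, not taken from the paper.

```python
# Minimal sketch: monitored building data as RDF-style triples, plus a
# rule that turns an observed performance pattern into design advice.
# Identifiers and the comfort threshold are illustrative assumptions.

TRIPLES = {
    ("ex:Zone1", "rdf:type", "bot:Space"),
    ("ex:Zone1", "ex:avgSummerTemp", 28.5),
    ("ex:Zone2", "rdf:type", "bot:Space"),
    ("ex:Zone2", "ex:avgSummerTemp", 24.1),
}

def match(pattern, triples):
    """Return the triples matching one (s, p, o) pattern; None is a wildcard."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

def recommend(triples, comfort_limit=26.0):
    """Mine the monitored data for overheating spaces and emit advice."""
    recs = []
    for subj, _, temp in match((None, "ex:avgSummerTemp", None), triples):
        if temp > comfort_limit:
            recs.append(f"{subj}: consider added shading or night ventilation")
    return recs

print(recommend(TRIPLES))  # only ex:Zone1 exceeds the comfort limit
```

    A real system would, as the article describes, replace the dictionary with an RDF store and SPARQL queries, and derive the rules from domain knowledge rather than hard-coding them.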

    Knowledge Organization Systems (KOS) in the Semantic Web: A Multi-Dimensional Review

    Since the Simple Knowledge Organization System (SKOS) specification and its SKOS eXtension for Labels (SKOS-XL) became formal W3C recommendations in 2009, a significant number of conventional knowledge organization systems (KOS) (including thesauri, classification schemes, name authorities, and lists of codes and terms produced before the arrival of the ontology wave) have made their journeys to join the Semantic Web mainstream. This paper uses "LOD KOS" as an umbrella term to refer to all of the value vocabularies and lightweight ontologies within the Semantic Web framework. The paper provides an overview of what the LOD KOS movement has brought to various communities and users. These are not limited to the value vocabulary constructors and providers, nor the catalogers and indexers who have a long history of applying the vocabularies to their products. The LOD dataset producers and LOD service providers, the information architects and interface designers, and researchers in the sciences and humanities are also direct beneficiaries of LOD KOS. The paper examines a set of collected cases (experimental or in real applications) and aims to identify the usages of LOD KOS in order to share practices and ideas among communities and users. Through the viewpoints of a number of different user groups, the functions of LOD KOS are examined from multiple dimensions. This paper focuses on the LOD dataset producers, vocabulary producers, and researchers (as end-users of KOS).
    Comment: 31 pages, 12 figures; accepted paper in the International Journal on Digital Libraries
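    One typical service a SKOS vocabulary offers its end-users is query expansion along skos:narrower links. The sketch below keeps a tiny concept scheme as plain triples and walks the hierarchy; the concept URIs are invented for illustration and are not part of SKOS itself.

```python
# A tiny SKOS-style concept scheme as (subject, predicate, object)
# triples, plus query expansion that follows skos:narrower links.
# The ex: concept URIs are illustrative assumptions.

SKOS_TRIPLES = [
    ("ex:Vehicles", "skos:prefLabel", "vehicles"),
    ("ex:Vehicles", "skos:narrower", "ex:Cars"),
    ("ex:Vehicles", "skos:narrower", "ex:Bicycles"),
    ("ex:Cars",     "skos:prefLabel", "cars"),
    ("ex:Bicycles", "skos:prefLabel", "bicycles"),
]

def narrower(concept):
    """Direct skos:narrower children of a concept."""
    return [o for s, p, o in SKOS_TRIPLES
            if s == concept and p == "skos:narrower"]

def expand(concept):
    """Collect a concept and all transitively narrower concepts."""
    found, stack = [], [concept]
    while stack:
        c = stack.pop()
        if c not in found:
            found.append(c)
            stack.extend(narrower(c))
    return found

print(expand("ex:Vehicles"))
```

    In a production setting the triples would come from a published LOD KOS (queried over SPARQL or loaded with an RDF library) rather than a hard-coded list.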

    Complex networks and public funding: the case of the 2007-2013 Italian program

    In this paper we apply techniques of complex network analysis to data sources representing public funding programs and discuss the importance of the considered indicators for program evaluation. Starting from the Open Data repository of the 2007-2013 Italian program Programma Operativo Nazionale 'Ricerca e Competitività' (PON R&C), we build a set of data models and perform network analysis over them. We discuss the obtained experimental results, outlining interesting new perspectives that emerge from the application of the proposed methods to the socio-economic evaluation of funded programs.
    Comment: 22 pages, 9 figures
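    A funding program naturally forms a bipartite network of organizations and funded projects, and centrality indicators of the kind the paper discusses can be computed directly from the edge list. The sketch below uses invented organizations, projects, and edges purely for illustration.

```python
# Sketch: a bipartite funding network (organizations <-> projects) and a
# normalized degree-centrality indicator for program evaluation.
# Node names and participation edges are made-up examples.
from collections import defaultdict

EDGES = [  # (organization, project) participation pairs
    ("OrgA", "P1"), ("OrgA", "P2"), ("OrgB", "P1"),
    ("OrgC", "P2"), ("OrgC", "P3"), ("OrgA", "P3"),
]

def degree_centrality(edges):
    """Degree of each node divided by the maximum possible degree (n - 1)."""
    deg = defaultdict(int)
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    n = len(deg) - 1
    return {node: d / n for node, d in deg.items()}

cent = degree_centrality(EDGES)
print(max(cent, key=cent.get))  # the most connected participant
```

    Libraries such as networkx provide the same indicator (and many richer ones, e.g. betweenness) out of the box; the point here is only how little data is needed to start evaluating a program as a network.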

    Information system support in construction industry with semantic web technologies and/or autonomous reasoning agents

    Information technology support is hard to find for the early design phases of the architectural design process. Many of the existing issues in such design decision support tools appear to be caused by a mismatch between the ways in which designers think and the ways in which information systems aim to give support. We therefore started an investigation of existing theories of design thinking, compared to the way in which design decision support systems provide information to the designer. We identify two main strategies towards information system support in the early design phase: (1) applications for making design try-outs, and (2) applications as autonomous reasoning agents. We outline preliminary implementations for both approaches and indicate to what extent these strategies can be used to improve information system support for the architectural designer.

    Report of the Stanford Linked Data Workshop

    The Stanford University Libraries and Academic Information Resources (SULAIR), with the Council on Library and Information Resources (CLIR), conducted a week-long workshop on the prospects for a large-scale, multi-national, multi-institutional prototype of a Linked Data environment for discovery of and navigation among the rapidly, chaotically expanding array of academic information resources. As preparation for the workshop, CLIR sponsored a survey by Jerry Persons, Chief Information Architect emeritus of SULAIR, which was published originally for workshop participants as background and is now publicly available. The original intention of the workshop was to devise a plan for such a prototype. However, such was the diversity of knowledge, experience, and views of the potential of Linked Data approaches that the workshop participants turned to two more fundamental goals: building common understanding and enthusiasm on the one hand, and identifying opportunities and challenges to be confronted in the preparation of the intended prototype and its operation on the other. In pursuit of those objectives, the workshop participants produced:
    1. a value statement addressing the question of why a Linked Data approach is worth prototyping;
    2. a manifesto for Linked Libraries (and Museums and Archives and …);
    3. an outline of the phases in a life cycle of Linked Data approaches;
    4. a prioritized list of known issues in generating, harvesting & using Linked Data;
    5. a workflow with notes for converting library bibliographic records and other academic metadata to URIs;
    6. examples of potential “killer apps” using Linked Data; and
    7. a list of next steps and potential projects.
    This report includes a summary of the workshop agenda, a chart showing the use of Linked Data in cultural heritage venues, and short biographies and statements from each of the participants.

    SMART-KG: Hybrid Shipping for SPARQL Querying on the Web

    While Linked Data (LD) provides standards for publishing (RDF) and querying (SPARQL) Knowledge Graphs (KGs) on the Web, serving, accessing and processing such open, decentralized KGs is often practically impossible, as query timeouts on publicly available SPARQL endpoints show. Alternative solutions such as Triple Pattern Fragments (TPF) attempt to tackle the problem of availability by pushing the query processing workload to the client side, but suffer from unnecessary transfer of irrelevant data on complex queries with large intermediate results. In this paper we present smart-KG, a novel approach to share the load between servers and clients, while significantly reducing data transfer volume, by combining TPF with shipping compressed KG partitions. Our evaluations show that smart-KG outperforms state-of-the-art client-side solutions and increases server-side availability towards more cost-effective and balanced hosting of open and decentralized KGs.
    Series: Working Papers on Information Systems, Information Business and Operations
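    The client-side query processing that both TPF and smart-KG rely on can be sketched in a few lines: the "server" answers only single triple patterns, and the client performs the join itself. The toy KG, the query, and all identifiers are invented; smart-KG's actual contribution (shipping compressed partitions alongside fragments) is not modeled here.

```python
# Toy client-side join in the spirit of TPF: the server answers single
# triple patterns only, and the client joins the results. The graph and
# the query (?x knows ?y . ?y worksAt <company>) are illustrative.

SERVER_KG = [
    ("alice", "knows",   "bob"),
    ("bob",   "knows",   "carol"),
    ("alice", "worksAt", "acme"),
    ("carol", "worksAt", "acme"),
]

def fragment(s=None, p=None, o=None):
    """Server side: answer one triple pattern (one TPF request)."""
    return [t for t in SERVER_KG
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

def friends_at(company):
    """Client side: evaluate ?x knows ?y . ?y worksAt <company>."""
    results = []
    for x, _, y in fragment(p="knows"):            # first pattern
        if fragment(s=y, p="worksAt", o=company):  # join on ?y
            results.append((x, y))
    return results

print(friends_at("acme"))
```

    The nested-loop join issues one fragment request per binding of ?y, which is exactly the per-request overhead that shipping a whole KG partition to the client is meant to avoid on complex queries.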

    Knowledge-Driven Harmonization of Sensor Observations: Exploiting Linked Open Data for IoT Data Streams

    The rise of the Internet of Things leads to an unprecedented number of continuous sensor observations that are available as IoT data streams. Harmonization of such observations is a labor-intensive task due to heterogeneity in format, syntax, and semantics. We aim to reduce the effort for such harmonization tasks by employing a knowledge-driven approach. To this end, we pursue the idea of exploiting the large body of formalized public knowledge represented as statements in Linked Open Data
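    The harmonization task this abstract describes can be pictured with a small sketch: heterogeneous sensor payloads are rewritten into one target schema via a mapping table of the kind one could derive from Linked Open Data (e.g. published unit definitions). The field names, target schema, and conversion factors below are illustrative assumptions, not taken from the paper.

```python
# Sketch of knowledge-driven harmonization: raw observations with
# heterogeneous field names and units are mapped onto a common schema.
# MAPPINGS stands in for knowledge that would come from Linked Open Data.

MAPPINGS = {  # source field -> (target field, converter to target unit)
    "temp_f":      ("temperature_c", lambda v: (v - 32) * 5 / 9),
    "temperature": ("temperature_c", lambda v: v),
    "hum":         ("humidity_pct",  lambda v: v),
}

def harmonize(observation):
    """Rewrite one raw observation into the common schema; drop unknowns."""
    out = {}
    for key, value in observation.items():
        if key in MAPPINGS:
            target, convert = MAPPINGS[key]
            out[target] = round(convert(value), 2)
    return out

print(harmonize({"temp_f": 77.0, "hum": 40}))
```

    The labor-intensive part the paper targets is building and maintaining the mapping table itself; deriving it from formalized public knowledge is what makes the approach knowledge-driven.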

    BIM and IoT Sensors Integration: A Framework for Consumption and Indoor Conditions Data Monitoring of Existing Buildings

    Poor access to information about a building's current performance makes it deeply difficult to plan appropriate interventions. Internet of Things (IoT) sensors make available a large quantity of data on the energy consumption and indoor conditions of an existing building, which can drive the choice of energy retrofit interventions. Moreover, current developments around the digital twin are driving the diffusion of Building Information Modeling (BIM) methods and tools that can provide valid support for managing all data and information for the retrofit process. This paper shows the aim and the findings of research focused on testing the integrated use of BIM methodology and IoT systems. A common data platform for the visualization of building indoor conditions (e.g., temperature, luminance, etc.) and of energy consumption parameters was developed. This platform, tested on a case study located in Italy, integrates low-cost IoT sensors with the Revit model. To obtain a dynamic and automated exchange of data between the sensors and the BIM model, the Revit software was integrated with the Dynamo visual programming platform and with a specific Application Programming Interface (API). The result is an easy and straightforward tool that can provide building managers with real-time data on the energy consumption and indoor conditions of buildings, and that also allows viewing historical sensor data in tables and graphs. Furthermore, the BIM model allows the management of other useful information about the building, such as dimensional data, functions, characteristics of building components, maintenance status, etc., which are essential for a much more conscious, effective and accurate management of the building and for defining the most suitable retrofit scenarios.
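    The sensor-to-BIM exchange described above reduces, at its core, to pushing readings keyed by sensor id into parameters of matching model elements. The sketch below shows only that core step with invented element ids, sensor ids, and parameter names; the paper itself realizes it with Revit, Dynamo, and a dedicated API.

```python
# Illustrative core of a sensor-to-BIM update loop: each reading is
# written into the parameters of its mapped model element. Ids and
# parameter names are hypothetical; real BIM APIs differ.
import datetime

BIM_ELEMENTS = {  # element id -> writable parameters
    "Room_101": {"Temperature": None, "LastUpdate": None},
    "Room_102": {"Temperature": None, "LastUpdate": None},
}

SENSOR_TO_ELEMENT = {"S1": "Room_101", "S2": "Room_102"}

def push_readings(readings, when):
    """Write each sensor reading into its mapped element's parameters."""
    for sensor_id, value in readings.items():
        element = BIM_ELEMENTS[SENSOR_TO_ELEMENT[sensor_id]]
        element["Temperature"] = value
        element["LastUpdate"] = when

push_readings({"S1": 21.4, "S2": 23.0}, datetime.datetime(2024, 1, 1, 12, 0))
print(BIM_ELEMENTS["Room_101"]["Temperature"])
```

    Keeping a timestamp per element is what makes the historical tables and graphs mentioned in the abstract possible: the platform only has to accumulate these (value, time) pairs instead of overwriting them.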

    Design of a tool to drive improved supply chain planning decisions within the semiconductor industry

    This is the final thesis written by Pol Aguirre Antonell at Infineon Technologies during the spring semester of 2023. The title of the thesis is Design of a Tool to Drive Improved Supply Chain Planning Decisions within the Semiconductor Industry. We start by explaining the context: the need for agile planning tools and data analysis within the team, and how the supply chain and the production of semiconductor products are key to a prosperous European semiconductor segment. To understand the needs of the team and the company, we study and present the supply chain planning process of Infineon. With this knowledge, we can begin creating the right tool to close the knowledge gap between planning allocation and revenue impacts. To do so, we benchmark the existing data tools and solutions within Infineon Technologies and choose to create a Tableau dashboard. This dashboard is connected to an automated data source, managed by a Tableau Prep flow connected to the data lake through Cloudera Hadoop. Building the dashboard was also an important practical part of this thesis and is presented in Chapter 6; most of its complexity comes from the waterfall chart, whereas the rest requires fewer workarounds. Overall, we present the use of Tableau workarounds towards lean dashboarding and a one-stop dashboard. Lean dashboarding is as important as having accurate data, and Chapter 7 explains how and why the dashboard meets the design and interactivity requirements set by the different stakeholders and key users. Finally, we present business analyses made possible only by the report and address how the team can integrate the findings into its process. Overall, the thesis covers all steps of creating a data analysis tool and dashboard, from idea to result.

    Integrated HBIM-GIS Models for Multi-Scale Seismic Vulnerability Assessment of Historical Buildings

    The complexity of historical urban centres increasingly demands a strategic improvement in the methods and scale of knowledge concerning the vulnerability aspect of seismic risk. A geographical multi-scale point of view is increasingly preferred in the scientific literature and in Italian regulatory policies: damage and vulnerability are assessed systemically from an urban perspective, according to the scale of the data, rather than through damage analysis of single buildings. In this sense, a geospatial data science approach can contribute towards generating, integrating, and relating urban databases and emergency-related data, in order to constitute a multi-scale 3D database supporting strategies for conservation and risk assessment scenarios. The proposed approach developed a vulnerability-oriented GIS/HBIM integration in an urban 3D geodatabase, based on multi-scale data derived from urban cartography and emergency mapping 3D data. Geometric and semantic information related to historical masonry buildings (specifically churches), together with structural data about architectural elements and damage, were integrated in the approach. This contribution aimed to support the levels of knowledge required by directives and vulnerability assessment studies, covering the generative workflow phase, the role of HBIM models in GIS environments, and user-oriented webGIS solutions for sharing and public use, exploiting the database for expert operators involved in heritage preservation.