
    Semantic data mapping technology to solve semantic data problem on heterogeneity aspect

    The diversity of applications developed with different programming languages, application/data architectures, database systems and representations of data/information leads to heterogeneity issues. One of the key challenges of heterogeneity is semantic heterogeneity of data: data with the same name but a different meaning, or data with a different name but the same meaning. Semantic data mapping is currently the most effective approach to this problem, and many semantic data mapping technologies have been used in recent years. This research compares and analyses existing semantic data mapping technologies against five criteria. Based on this comparative analysis, it recommends the most appropriate semantic data mapping technology and then applies the recommended technology to real data in a specific application. The result of this research is a semantic data mapping file that contains all data structures in the application data source. This mapping file can be used to map, share and integrate data with the semantic data mappings of other applications, and can also be integrated with an ontology language.
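
    The abstract does not name the recommended technology, so the following is only an illustrative sketch of what a semantic data mapping file can capture: a small Python/rdflib script (an assumed toolchain, with a hypothetical example.org vocabulary) that records the structure of one application table and serialises it as Turtle so it can later be shared or aligned with an ontology.

```python
# Minimal sketch of a semantic data mapping file, assuming rdflib and an
# invented example vocabulary; this is not the tooling compared in the paper.
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX = Namespace("http://example.org/mapping#")  # hypothetical namespace

g = Graph()
g.bind("ex", EX)

# Describe one application table ("Student") and one of its columns so that
# another application can discover the structure and align names that carry
# the same meaning.
g.add((EX.Student, RDF.type, RDFS.Class))
g.add((EX.Student, RDFS.label, Literal("students table in the application database")))
g.add((EX.studentName, RDF.type, RDF.Property))
g.add((EX.studentName, RDFS.domain, EX.Student))
g.add((EX.studentName, RDFS.label, Literal("full_name column")))

# Serialise the mapping as Turtle so it can be shared and integrated later.
g.serialize(destination="student_mapping.ttl", format="turtle")
```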

    Data mapping process to handle semantic data problem on student grading system

    Many applications are developed in the education domain. The information and data for each application are stored in distributed locations, with a different data representation in each database. This situation leads to heterogeneity at the data integration level, which can cause many problems. One major issue concerns the semantic relationships among data across applications in the education domain, where learning data may have the same name but a different meaning, or a different name but the same meaning. This paper discusses a semantic data mapping process to handle this semantic relationship problem. The process has two main parts. The first part is a semantic data mapping engine that produces a data mapping in the Turtle (.ttl) format, a standard RDF serialization, which can be used by a local Java application through the Jena library and a triple store. The Turtle file contains detailed information about the data schema of every application in the database system. The second part provides a D2R Server that can be accessed from the outside environment over HTTP, using SPARQL clients, Linked Data clients (RDF formats) and an HTML browser. To implement the process, this paper focuses on the student grading system in the learning environment of the education domain. Following the proposed semantic data mapping process, a Turtle file is produced as the result of the first part; this file can then be combined and integrated with other Turtle files in order to map and link to the data representations of other applications.
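
    The pipeline described above uses the Jena library in Java and a D2R Server; the sketch below swaps in Python with rdflib purely for illustration, loading a hypothetical Turtle mapping file and querying it with SPARQL the way an external SPARQL client would.

```python
# Hedged sketch: load the Turtle mapping produced by the first part of the
# process and query it with SPARQL, as a D2R/SPARQL client would.
# rdflib stands in for the Jena library used in the paper; the file name and
# vocabulary are hypothetical.
from rdflib import Graph

g = Graph()
g.parse("student_grading_mapping.ttl", format="turtle")

# List every class (table) described in the mapping together with its label,
# so a second application can line up fields that share the same meaning.
query = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?cls ?label WHERE {
    ?cls a rdfs:Class ;
         rdfs:label ?label .
}
"""
for cls, label in g.query(query):
    print(cls, "->", label)
```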

    A systematic review of data quality issues in knowledge discovery tasks

    Large volumes of data are growing because organizations continuously capture collective amounts of data for better decision-making. The most fundamental challenge is to explore these large volumes of data and extract useful knowledge for future actions through knowledge discovery tasks; nevertheless, much of the data is of poor quality. We present a systematic review of data quality issues in knowledge discovery tasks and a case study applied to the agricultural disease known as coffee rust.

    Innovating the Construction Life Cycle through BIM/GIS Integration: A Review

    The construction sector is in continuous evolution due to digitalisation and the integration into daily activities of building information modelling approaches and methods that impact the overall life cycle. This study investigates BIM/GIS integration through the adoption of ontologies and metamodels, providing a critical analysis of the existing literature. Ontologies and metamodels share several similarities and could be combined into potential solutions for BIM/GIS integration in complex tasks, such as asset management, where heterogeneous sources of data are involved. The research adopts a systematic literature review (SLR), providing a formal approach to retrieving scientific papers from dedicated online databases. The results are then analysed in order to describe the state of the art and suggest future research paths, which is useful for both researchers and practitioners. From the SLR, it emerged that several studies address ontologies as a promising way to overcome the semantic barriers of BIM/GIS integration. On the other hand, metamodels (and MDE and MDA approaches in general) are rarely found in relation to the integration topic. Moreover, the joint application of ontologies and metamodels for BIM/GIS applications is an unexplored field. The novelty of this work is the proposal of the joint application of ontologies and metamodels to perform BIM/GIS integration, for the development of software and systems for asset management.
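
    As a rough illustration of the kind of semantic bridge the reviewed studies propose, the sketch below uses Python with rdflib to state an equivalence between a BIM (IFC-style) class and a GIS (CityGML-style) class; the namespaces and class names are simplified assumptions, not taken from the reviewed papers.

```python
# Illustrative sketch only: aligning a BIM (IFC) concept with a GIS (CityGML)
# concept through an ontology bridge, the kind of semantic link the reviewed
# studies discuss. Namespaces and class names are simplified assumptions.
from rdflib import Graph, Namespace, OWL, RDF

IFC = Namespace("http://example.org/ifc#")          # stand-in for an IFC ontology
CITYGML = Namespace("http://example.org/citygml#")  # stand-in for CityGML

bridge = Graph()
bridge.bind("owl", OWL)

# State that the two heterogeneous models describe the same real-world thing,
# so an asset-management query over either source can be rewritten for the other.
bridge.add((IFC.IfcWall, RDF.type, OWL.Class))
bridge.add((CITYGML.WallSurface, RDF.type, OWL.Class))
bridge.add((IFC.IfcWall, OWL.equivalentClass, CITYGML.WallSurface))

print(bridge.serialize(format="turtle"))
```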

    Open Standard, Open Source and Peer to Peer Methods for Collaborative Product Development and Knowledge Management

    Tools such as product data management (PDM) and its offspring, product lifecycle management (PLM), enable collaboration within and between enterprises. Large enterprises have invariably been the target of software vendors for the development of such tools, resulting in large centralized applications that are beyond the means of small to medium enterprises (SMEs). Even after these efforts, large enterprises face numerous difficulties with PLM. Firstly, enterprises evolve, and an evolving enterprise needs an evolving data management system; with large applications, such configuration changes have to be made at the server level by dedicated staff. The second problem arises when enterprises wish to collaborate with a large number of suppliers and original equipment manufacturer (OEM) customers. Current applications enable collaboration using business-to-business (B2B) protocols, but these do not take into account that disparate enterprises do not have unitary data models or workflows. This is a strong factor in reducing the ability of large enterprises to participate in collaborative projects.

    Where does wearable technology fit in the Circular Economy?

    Environmental concerns have become a core focus in today’s fashion and textile industry. Sustainability underlies all aspects of the industry, from sourcing raw materials through design, manufacturing, consumer use and end-of-life disposal. Wearable electronics has emerged from a niche industry to one with an estimated market value of US$20 billion in 2015, expected to rise to US$70 billion by 2025 (Harrop, 2015). Although still a relatively immature industry, it is starting to recognise environmental concerns, but thus far these have not become an industry driver. In this paper we first look at the current state of sustainability within wearable technology. In the second section we identify key drivers and issues, then propose ways in which wearable technology can more fully embrace the Circular Economy. In the concluding section we look at future technologies and their likely environmental impact. As wearable technology has now started to mature, all aspects of sustainability need to be addressed. We look at lessons that can be taken and applied from the textile and fashion industry, such as the sourcing, use, reuse and disposal of material. We also examine issues unique to wearable technology, for example the need for a power supply and the problem of technological obsolescence within the garment. From a design perspective we examine the ways in which wearable technology is applied within fashion and how this could relate more closely to the activity of garment use. From this position we then question whether it is possible for wearable technology to contribute to garment longevity by examining issues and concepts related to fashionability, durability and repair. In the concluding portion of the paper we consider the introduction of future technologies and disruptive manufacturing processes that have the potential to provide challenges that demand design and manufacturing solutions that are both sustainable and innovative.

    Forecasting obsolescence risk and product lifecycle with machine learning

    Rapid changes in technology have led to an increasingly fast pace of product introductions. New components offering added functionality, improved performance and quality are routinely available to a growing number of industry sectors (e.g., electronics, automotive, and defense industries). For long-life systems such as planes, ships, nuclear power plants, and more, these rapid changes help sustain the useful life but, at the same time, present significant challenges associated with managing change. Obsolescence of components and/or subsystems can be technical, functional, related to style, etc., and occurs in nearly any industry. Over the years, many approaches for forecasting obsolescence have been developed. Inputs to such methods have been based on manual inputs and best estimates from product planners, or on market analysis of parts, components, or assemblies identified as higher risk for obsolescence on a bill of materials. Gathering the inputs required for forecasting is often subjective and laborious, causing inconsistencies in predictions. To address this issue, the objective of this research is to develop a new framework and methodology capable of identifying and forecasting obsolescence with a high degree of accuracy while minimizing maintenance and upkeep. To accomplish this objective, current obsolescence forecasting methods were categorized by output type and assessed in terms of pros and cons. A machine learning methodology capable of predicting obsolescence risk level and estimating the date of obsolescence was developed. The methodology is used to classify parts as active (in production) or obsolete (discontinued) and can be used during the design stage to guide part selection. Estimates of the date parts will cease production can be used to more efficiently time redesigns of multiple obsolete parts from a product or system. A case study of the cell phone market is presented to demonstrate how the methodology can forecast product obsolescence with a high degree of accuracy. For example, results of obsolescence forecasting in the case study predict parts as active or obsolete with 98.3% accuracy and regularly predict obsolescence dates to within a few months.
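
    The abstract does not disclose the exact features or models used, so the sketch below is only an assumed illustration of the two outputs it describes: a classifier that labels parts as active or obsolete and a regressor that estimates time to obsolescence, here using random forests over synthetic part attributes.

```python
# Hedged sketch of the two outputs described in the abstract: a classifier for
# active vs. obsolete parts and a regressor for expected time to obsolescence.
# Feature names, the synthetic data, and the choice of random forests are
# assumptions; the abstract does not specify the paper's exact models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Hypothetical part attributes: years since introduction, relative sales trend,
# and number of newer substitutes on the market.
X = np.column_stack([
    rng.uniform(0, 15, n),   # age_years
    rng.uniform(-1, 1, n),   # sales_trend
    rng.integers(0, 10, n),  # substitute_count
])
# Synthetic labels standing in for lifecycle data from a parts database.
is_obsolete = (X[:, 0] + 3 * X[:, 2] / 10 - 2 * X[:, 1] > 8).astype(int)
years_to_obsolescence = np.clip(10 - X[:, 0] + 2 * X[:, 1], 0, None)

X_train, X_test, y_train, y_test = train_test_split(X, is_obsolete, random_state=0)

# Classification output: active (0) vs. obsolete (1), usable during design.
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("active/obsolete accuracy:", clf.score(X_test, y_test))

# Regression output: estimated years until the part ceases production,
# usable to time redesigns of multiple at-risk parts together.
reg = RandomForestRegressor(random_state=0).fit(X, years_to_obsolescence)
print("predicted years to obsolescence for a 5-year-old part:",
      reg.predict([[5.0, 0.1, 2]])[0])
```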