
    A study of existing Ontologies in the IoT-domain

    Several domains have adopted the increasing use of IoT-based devices to collect sensor data for generating abstractions and perceptions of the real world. This sensor data is multi-modal and heterogeneous in nature. The heterogeneity induces interoperability issues when developing cross-domain applications, restricting the possibility of reusing sensor data to build new applications. As a solution, semantic approaches have been proposed in the literature to tackle problems related to the interoperability of sensor data. Several ontologies have been proposed to handle different aspects of IoT-based sensor data collection, ranging from discovering IoT sensors for data collection to applying reasoning over the collected data for drawing inferences. In this paper, we survey these existing semantic ontologies to provide an overview of recent developments in the field. We highlight the fundamental ontological concepts (e.g., sensor capabilities and context awareness) required for an IoT-based application, and survey the existing ontologies that cover these concepts. Based on our study, we also identify the shortcomings of currently available ontologies, which serve as a stepping stone towards stating the need for a common unified ontology for the IoT domain. (Comment: submitted to the Elsevier JWS SI on Web Semantics for the Internet/Web of Things.)
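    The ontological concepts this survey highlights (sensors, their capabilities, and the observations they produce) can be pictured with a toy triple store. This is only an illustrative sketch: the class and property names below are loosely modelled on the W3C SSN/SOSA vocabulary common in this literature, and the pattern-matching helper is an invented stand-in for a real SPARQL engine, not the survey's own tooling.

```python
# Toy triple store: SSN/SOSA-style sensor knowledge as (s, p, o) triples.
# All identifiers (TempSensor01, Cap01, Obs42) are hypothetical examples.
triples = {
    ("TempSensor01", "rdf:type", "sosa:Sensor"),
    ("TempSensor01", "sosa:observes", "AirTemperature"),
    ("TempSensor01", "ssn:hasSystemCapability", "Cap01"),
    ("Cap01", "ssn:hasAccuracy", "0.5C"),
    ("Obs42", "rdf:type", "sosa:Observation"),
    ("Obs42", "sosa:madeBySensor", "TempSensor01"),
    ("Obs42", "sosa:hasSimpleResult", "21.3"),
}

def query(triples, s=None, p=None, o=None):
    """Return triples matching an (s, p, o) pattern; None is a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# Which observations were made by TempSensor01?
obs = query(triples, p="sosa:madeBySensor", o="TempSensor01")
```

Modelling sensor capabilities and observations as triples like this is what allows cross-domain applications to reuse each other's sensor data: a query is written against the shared vocabulary rather than against each source's native schema.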

    Evaluate Various Techniques of Data Warehouse and Data Mining with Web Based Tool

    All enterprises must operate proficiently and productively to survive in the market and increase their profitability. This challenge becomes more complicated with advances in information technology and the growing volume and complexity of information. Today, the success of an enterprise is not just the result of its resources' efforts but also depends on its ability to mine the stored information. Data warehousing is a collection of decision-making procedures for integrating and managing large, varied data efficiently and systematically, while data mining helps organizations scrutinize their data more effectively to obtain valuable information that can support intelligent and strategic decision making. Data mining comprises several techniques and mathematical algorithms used to mine large data sets and improve organizational performance. Clustering is a powerful and widely accepted data mining method that partitions large data sets into groups of similar objects, giving the end user a concise view of the database. This study discusses the basic concept of clustering, its meaning and its applications, especially in business, for segmenting and selecting target markets; the technique is useful on the marketing and sales side, for example to send a promotion for a product or service to the right target audience. Association, also referred to as the relation technique, is a well-known data mining technique in which a pattern is inferred from affiliations between items in the same business transaction. Large enterprises rely on it to research customers' buying preferences: in tracking buying behaviour, a retailer might find that customers who buy dal usually also buy sambar onions, and therefore suggest onions the next time those customers buy dal.
    Classification is a data mining concept that differs from the above in that it is based on machine learning and uses mathematical techniques such as linear programming, decision trees and neural networks. In classification, enterprises try to build tools that learn how to assign data items to groups. For instance, a company can define a classification task as: "given the records of all employees who offered to resign from the company, predict the number of individuals likely to resign in the future." The company can then divide employee records into two groups, namely "separate" and "retain", and use its data mining software to assign employees to the groups created earlier. Fuzzy logic closely resembles human reasoning in its handling of imperfect information and can be used as a flexible tool to soften classification boundaries so that they suit real problems more effectively; the present study discusses the meaning of fuzzy logic, its applications and its main features. A tool is built to examine data mining algorithms and the models behind them; clustering is applied in the tool as a sample method to select training data from a large database, reducing complexity and computation time, and the k-nearest neighbour method can be used in many applications, from general to specific, to find the requested data in a huge data set. A decision tree is a structure consisting of a root node, branches and leaf nodes: each interior node represents a test on an attribute, each branch denotes the outcome of a test, and each leaf node represents a class label, with the topmost node being the root.
    With a decision tree, we start with a simple question that has multiple answers, and each answer leads to a further question that helps classify or identify the data so that it can be categorized, or so that a prediction can be made. Regression analysis is the data mining method of identifying and analyzing relationships between variables; it is used to estimate the likelihood of a specific variable given the presence of others. Outlier detection, also called outlier analysis or outlier mining, refers to identifying data items in a data set that do not match an expected pattern or behaviour, and can be applied in a variety of domains such as intrusion detection, fraud detection and fault detection. Finally, the sequential patterns technique helps to find similar patterns or trends in transaction data over a definite period.
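    The association technique described above can be made concrete with a minimal support/confidence computation over toy transactions, echoing the abstract's dal/onion example. This is an illustrative sketch, not the web-based tool the paper evaluates; production systems derive such rules at scale with algorithms such as Apriori or FP-Growth.

```python
# Toy market-basket data; each transaction is a set of purchased items.
transactions = [
    {"dal", "onion", "rice"},
    {"dal", "onion"},
    {"dal", "rice"},
    {"onion", "milk"},
    {"dal", "onion", "milk"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent in basket | antecedent in basket)."""
    return support(antecedent | consequent) / support(antecedent)

# "Customers who buy dal also buy onion": 3 of the 4 dal baskets
# contain onion, so the rule dal -> onion has confidence 0.75.
conf = confidence({"dal"}, {"onion"})
```

A retailer would keep only rules whose support and confidence clear chosen thresholds, then use them for suggestions of the kind the abstract describes.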

    Flowstats: an ontology based network management tool

    One of the problems that hinders large-scale network management is the number of possible heterogeneous data sources that provide network information, and the difficulty of focusing on a desired network segment without deep knowledge of the network structure. This work investigates how to intelligently and efficiently refine and manage a vast number of network monitoring data sources by applying artificial-intelligence reasoning through an intuitive user interface. We aim to minimise the user interaction and user knowledge required when searching for the desired network monitoring information by refining the presented information based on user choices. The concept of an ontology is utilised to create a knowledge base covering multiple aspects of our testbed: the internal management structure, the physical location of data sources, and network switch metadata.

    Geospatial Data Management Research: Progress and Future Directions

    Without geospatial data management, today's challenges in big data applications such as earth observation, geographic information system/building information modeling (GIS/BIM) integration, and 3D/4D city planning cannot be solved. Furthermore, geospatial data management plays a connecting role between data acquisition, data modelling, data visualization, and data analysis. It enables the continuous availability of geospatial data and the replicability of geospatial data analysis. In the first part of this article, five milestones of geospatial data management research are presented that were achieved during the last decade. The first one reflects advancements in BIM/GIS integration at data, process, and application levels. The second milestone presents theoretical progress by introducing topology as a key concept of geospatial data management. In the third milestone, 3D/4D geospatial data management is described as a key concept for city modelling, including subsurface models. Progress in modelling and visualization of massive geospatial features on web platforms is the fourth milestone which includes discrete global grid systems as an alternative geospatial reference framework. The intensive use of geosensor data sources is the fifth milestone which opens the way to parallel data storage platforms supporting data analysis on geosensors. In the second part of this article, five future directions of geospatial data management research are presented that have the potential to become key research fields of geospatial data management in the next decade. Geo-data science will have the task to extract knowledge from unstructured and structured geospatial data and to bridge the gap between modern information technology concepts and the geo-related sciences.
Topology is presented as a powerful and general concept to analyze GIS and BIM data structures and spatial relations that will be of great importance in emerging applications such as smart cities and digital twins. Data-streaming libraries and “in-situ” geo-computing on objects executed directly on the sensors will revolutionize geo-information science and bridge geo-computing with geospatial data management. Advanced geospatial data visualization on web platforms will enable the representation of dynamically changing geospatial features or moving objects’ trajectories. Finally, geospatial data management will support big geospatial data analysis, and graph databases are expected to experience a revival on top of parallel and distributed data stores supporting big geospatial data analysis
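    As a rough illustration of the discrete-global-grid idea mentioned above, a hierarchical cell key assigns nearby coordinates a shared prefix, which is what makes such grids useful as an indexing and alternative reference framework. This is a simplified quadtree-style sketch over plain latitude/longitude; real DGGSs (e.g., H3 or rHEALPix, named here only as examples) use more sophisticated equal-area cell geometries.

```python
def grid_key(lat, lon, level):
    """Quadtree-style cell key: nearby points share a key prefix."""
    key = ""
    lat0, lat1, lon0, lon1 = -90.0, 90.0, -180.0, 180.0
    for _ in range(level):
        mid_lat, mid_lon = (lat0 + lat1) / 2, (lon0 + lon1) / 2
        # encode which of the four quadrants the point falls into
        quad = (2 if lat >= mid_lat else 0) + (1 if lon >= mid_lon else 0)
        key += str(quad)
        # shrink the bounding box to the chosen quadrant
        if lat >= mid_lat:
            lat0 = mid_lat
        else:
            lat1 = mid_lat
        if lon >= mid_lon:
            lon0 = mid_lon
        else:
            lon1 = mid_lon
    return key

# two nearby points fall in the same coarse cell (identical keys)
same_cell = grid_key(52.20, 0.12, 4) == grid_key(52.21, 0.13, 4)
```

Because keys are plain strings with prefix locality, a parallel data store can range-scan a key prefix to retrieve all features in a region, which is the storage-level payoff the article attributes to grid-based reference frameworks.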

    Utilizing industry 4.0 on the construction site: challenges and opportunities

    In recent years a step change has been seen in the rate of adoption of Industry 4.0 technologies by manufacturers and industrial organisations alike. This paper discusses the current state of the art in the adoption of Industry 4.0 technologies within the construction industry. Increasing complexity in onsite construction projects, coupled with the need for higher productivity, is leading to increased interest in the potential use of Industry 4.0 technologies. This paper discusses the relevance of the following key Industry 4.0 technologies to construction: data analytics and artificial intelligence; robotics and automation; building information management; sensors and wearables; digital twin; and industrial connectivity. Industrial connectivity is a key aspect, as it ensures that all Industry 4.0 technologies are interconnected, allowing the full benefits to be realized. This paper also presents a research agenda for the adoption of Industry 4.0 technologies within the construction sector: a three-phase use of intelligent assets from the point of manufacture up to after build, and a four-stage R&D process for the implementation of smart wearables in a digitally enhanced construction site.

    When Things Matter: A Data-Centric View of the Internet of Things

    With the recent advances in radio-frequency identification (RFID), low-cost wireless sensor devices, and Web technologies, the Internet of Things (IoT) approach has gained momentum in connecting everyday objects to the Internet and facilitating machine-to-human and machine-to-machine communication with the physical world. While IoT offers the capability to connect and integrate both digital and physical entities, enabling a whole new class of applications and services, several significant challenges need to be addressed before these applications and services can be fully realized. A fundamental challenge centers around managing IoT data, typically produced in dynamic and volatile environments, which is not only extremely large in scale and volume but also noisy and continuous. This article surveys the main techniques and state-of-the-art research efforts in IoT from a data-centric perspective, including data stream processing, data storage models, complex event processing, and searching in IoT. Open research issues for IoT data management are also discussed.
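    A minimal sketch of the windowed stream-processing idea the article surveys: a fixed-size sliding window smooths a noisy sensor stream and flags readings that deviate sharply from the window average. The window size and spike threshold below are illustrative assumptions, not values from the article.

```python
from collections import deque

def smooth_stream(readings, window=3, spike=10.0):
    """Sliding-window average over a sensor stream.

    Returns (smoothed_value, is_spike) per reading, where is_spike
    marks readings far from the current window average.
    """
    buf, out = deque(maxlen=window), []
    for r in readings:
        buf.append(r)                      # oldest reading drops out
        avg = sum(buf) / len(buf)
        out.append((round(avg, 2), abs(r - avg) > spike))
    return out

# a stable temperature stream with one noisy spike at 55.0
result = smooth_stream([20.0, 21.0, 20.5, 55.0, 20.8])
```

Real stream processors add windowing by time rather than count, out-of-order handling, and distributed state, but the core pattern of incremental aggregation over a bounded buffer is the same.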

    Developing a dynamic digital twin at a building level: Using Cambridge campus as case study

    A Digital Twin (DT) is a digital replica of physical assets, processes and systems. DTs integrate artificial intelligence, machine learning and data analytics to create dynamic digital models that can learn from multiple sources and update the status of their physical counterparts. A DT equipped with appropriate algorithms can represent and predict the future condition and performance of its physical counterpart. Current developments related to DTs for buildings and other infrastructure assets are still at an early stage, and most focus on the architectural and engineering/construction point of view; less attention has been paid to the operation and maintenance (O&M) phase, where the value potential is immense. A systematic and clear architecture, verified with practical use cases, is the foremost step towards constructing a DT for effective operation and maintenance of assets. This paper presents a system architecture for developing dynamic DTs at the building level that integrates heterogeneous data sources, supports intelligent data querying, and enables smarter decision-making processes, further bridging the gap in human relationships with buildings/regions via more intelligent, visual and sustainable channels. This architecture is brought to life through the development of a dynamic DT demonstrator of the West Cambridge site of the University of Cambridge. Specifically, the demonstrator integrates an as-is multi-layered IFC Building Information Model (BIM), building management system data, space management data, real-time Internet of Things (IoT)-based sensor data, asset registry data, and an asset tagging platform. It also includes two applications: (1) improving asset maintenance and asset tracking using Augmented Reality (AR); and (2) equipment failure prediction. The long-term goals of this demonstrator are also discussed in this paper.
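    The data-integration idea behind such a building-level DT can be sketched as joining records from separate sources (BIM model, asset registry, live IoT readings) on a shared asset identifier. All field and asset names below are invented for illustration; the paper's actual schema and platform are not shown here.

```python
# Hypothetical per-source records keyed by a shared asset identifier.
bim_elements = {"AHU-01": {"ifc_type": "IfcUnitaryEquipment", "floor": 2}}
asset_registry = {"AHU-01": {"installed": "2015-06", "maintainer": "FM-team"}}
iot_readings = {"AHU-01": {"supply_temp_c": 17.9, "fan_status": "on"}}

def twin_view(asset_id):
    """Merge the per-source records for one asset into a single view."""
    merged = {"asset_id": asset_id}
    for source in (bim_elements, asset_registry, iot_readings):
        merged.update(source.get(asset_id, {}))
    return merged

# one unified record combining static BIM data with live sensor state
view = twin_view("AHU-01")
```

The hard part in practice, which the paper's architecture addresses, is that the sources rarely share clean identifiers; an asset tagging platform exists precisely to establish that common key.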

    Integration of decision support systems to improve decision support performance

    The decision support system (DSS) is a well-established research and development area, but traditional isolated, stand-alone DSSs have recently been facing new challenges. To improve the performance of DSSs and meet these challenges, research has been actively carried out on integrated decision support systems (IDSS). This paper reviews the current research efforts regarding the development of IDSS. The focus of the paper is on the integration aspect of IDSS from multiple perspectives, and on the technologies that support this integration. More than 100 papers and software systems are discussed; current research efforts and the development status of IDSS are explained, compared and classified, and future trends and challenges in integration are outlined. The paper concludes that by addressing integration, better support will be provided to decision makers, with the expectation of both better decisions and improved decision-making processes.