4,123 research outputs found

    CAPD: A Context-Aware, Policy-Driven Framework for Secure and Resilient IoBT Operations

    The Internet of Battlefield Things (IoBT) will advance the operational effectiveness of infantry units. However, this requires autonomous assets such as sensors, drones, combat equipment, and uncrewed vehicles to collaborate, securely share information, and be resilient to adversary attacks in contested multi-domain operations. CAPD addresses this problem by providing a context-aware, policy-driven framework supporting data and knowledge exchange among autonomous entities in a battlespace. We propose an IoBT ontology that facilitates controlled information sharing to enable semantic interoperability between systems. Its key contributions include a knowledge graph with a shared semantic schema, integration with background knowledge, efficient mechanisms for enforcing data consistency and drawing inferences, and support for attribute-based access control. Sensors in the IoBT provide data that populate knowledge graphs based on the ontology. This paper describes using CAPD to detect and mitigate adversary actions. CAPD enables situational awareness by reasoning over the sensed data with SPARQL queries. For example, adversaries can cause sensor failure or hijacking and disrupt the tactical networks to degrade video surveillance. In such instances, CAPD uses an ontology-based reasoner to determine how alternative approaches can still support the mission. Depending on bandwidth availability, the reasoner initiates the creation of a reduced-frame-rate grayscale video by active transcoding or transmits only still images. This ability to reason over the mission-sensed environment and attack context permits the autonomous IoBT system to exhibit resilience in contested conditions.
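    To make the reasoning step concrete, below is a minimal sketch (using rdflib) of how a policy-driven reasoner might query a populated knowledge graph for video sensors with degraded bandwidth and pick a fallback delivery mode. The iobt: class and property names and the thresholds are invented for illustration; the abstract does not give CAPD's actual ontology or policies.

```python
# Illustrative sketch: query a populated IoBT knowledge graph for video
# sensors whose reported bandwidth has dropped, then choose a degraded
# delivery mode. Class/property names are hypothetical, not CAPD's ontology.
from rdflib import Graph, Literal, Namespace, RDF

IOBT = Namespace("http://example.org/iobt#")

g = Graph()
g.bind("iobt", IOBT)

# Populate the graph from (simulated) sensed data.
cam = IOBT["camera42"]
g.add((cam, RDF.type, IOBT.VideoSensor))
g.add((cam, IOBT.bandwidthKbps, Literal(180)))

# Find video sensors whose available bandwidth is below a policy threshold.
query = """
PREFIX iobt: <http://example.org/iobt#>
SELECT ?sensor ?bw WHERE {
    ?sensor a iobt:VideoSensor ;
            iobt:bandwidthKbps ?bw .
    FILTER (?bw < 256)
}
"""

for sensor, bw in g.query(query):
    # A policy rule might map bandwidth bands to delivery modes.
    mode = "still_images" if int(bw) < 64 else "grayscale_reduced_fps"
    print(f"{sensor}: bandwidth {bw} kbps -> switch to {mode}")
```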

    Unified System on Chip RESTAPI Service (USOCRS)

    Abstract. This thesis investigates the development of a Unified System on Chip REST API Service (USOCRS) to enhance the efficiency and effectiveness of SoC verification reporting. The research aims to overcome the challenges associated with the transfer, utilization, and interpretation of SoC verification reports by creating a unified platform that integrates various tools and technologies. The research methodology follows a design science approach. A thorough literature review was conducted to explore existing approaches and technologies related to SoC verification reporting, automation, data visualization, and API development. The review revealed gaps in the current state of the field, providing a basis for further investigation. Using the insights gained from the literature review, a system design and implementation plan were developed. This plan makes use of cutting-edge technologies such as FastAPI, SQL and NoSQL databases, Azure Active Directory for authentication, and cloud services. The Verification Toolbox was employed to validate SoC reports against the organization’s standards. The system went through manual testing, and user satisfaction was evaluated to ensure its functionality and usability. The results of this study demonstrate the successful design and implementation of the USOCRS, offering SoC engineers a unified and secure platform for uploading, validating, storing, and retrieving verification reports. The USOCRS facilitates seamless communication between users and the API, granting easy access to vital information, including successes, failures, and test coverage, derived from submitted SoC verification reports. By automating and standardizing the SoC verification reporting process, the USOCRS eliminates manual and repetitive tasks usually done by developers, thereby enhancing productivity and establishing a robust and reliable framework for report storage and retrieval. Through the integration of diverse tools and technologies, the USOCRS presents a comprehensive solution that adheres to the required specifications of the SoC schema used within the organization. Furthermore, the USOCRS significantly improves the efficiency and effectiveness of SoC verification reporting: it streamlines the submission process, reduces latency through optimized data storage, and enables meaningful extraction and analysis of report data.
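    As a rough illustration of the kind of service described, here is a minimal FastAPI sketch of a report-upload endpoint and a summary endpoint. The route names, report schema, and validation rule are hypothetical; the real USOCRS also adds Azure Active Directory authentication, the Verification Toolbox validation, and SQL/NoSQL persistence.

```python
# Minimal sketch of a report-upload endpoint in the spirit of USOCRS.
# Endpoint names, the schema, and the validation step are hypothetical.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="SoC verification report service (sketch)")

class VerificationReport(BaseModel):
    design: str
    testbench: str
    passed: int
    failed: int
    coverage_percent: float

REPORTS: list[VerificationReport] = []  # stand-in for SQL/NoSQL storage

@app.post("/reports")
def upload_report(report: VerificationReport) -> dict:
    # A real deployment would validate against the organization's SoC schema.
    if not 0.0 <= report.coverage_percent <= 100.0:
        raise HTTPException(status_code=422, detail="coverage out of range")
    REPORTS.append(report)
    return {"stored": True, "index": len(REPORTS) - 1}

@app.get("/reports/summary")
def summary() -> dict:
    # Aggregate successes and failures across submitted reports.
    return {
        "reports": len(REPORTS),
        "passed": sum(r.passed for r in REPORTS),
        "failed": sum(r.failed for r in REPORTS),
    }
```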

    Developing a Digital Twin at Building and City Levels: A Case Study of West Cambridge Campus

    A digital twin (DT) refers to a digital replica of physical assets, processes, and systems. DTs integrate artificial intelligence, machine learning, and data analytics to create living digital simulation models that are able to learn and update from multiple sources as well as represent and predict the current and future conditions of their physical counterparts. However, current activities related to DTs are still at an early stage with respect to buildings and other infrastructure assets from an architectural and engineering/construction point of view. Less attention has been paid to the operation and maintenance (O&M) phase, which is the longest time span in the asset life cycle. A systematic and clear architecture for constructing a DT, verified with practical use cases, would be the foremost step towards effective operation and maintenance of buildings and cities. Drawing on current research on multitier architectures, this paper presents a system architecture for DTs that is specifically designed for both the building and city levels. Based on this architecture, a DT demonstrator of the West Cambridge site of the University of Cambridge in the UK was developed that integrates heterogeneous data sources, supports effective data querying and analysis, supports decision-making processes in O&M management, and further bridges the gap between humans and buildings/cities. This paper walks through the whole process of developing DTs at the building and city levels from a technical perspective and shares the lessons learned and challenges involved in developing DTs in real practice. The results of developing this demonstrator provide a clear roadmap and highlight particular DT research efforts for asset management practitioners, policymakers, and researchers to promote the implementation and development of DTs at the building and city levels.
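    The layered idea behind such a DT, source data integrated onto named assets that an O&M query layer can reason over, can be sketched minimally as below. The asset name, metric, and threshold are invented for illustration and do not reflect the demonstrator's actual data model.

```python
# Illustrative sketch of a building/city digital twin's layering:
# heterogeneous source data is mapped onto named assets and queried to
# support O&M decisions. Names and thresholds are invented.
from dataclasses import dataclass, field

@dataclass
class Asset:
    asset_id: str
    location: str
    readings: dict[str, list[float]] = field(default_factory=dict)

    def latest(self, metric: str) -> float | None:
        values = self.readings.get(metric)
        return values[-1] if values else None

# Data-integration layer: readings from different source systems land here.
pump = Asset("AHU-3-pump", "West Cambridge plant room")
pump.readings["vibration_mm_s"] = [2.1, 2.4, 6.8]

# Query/decision layer: a simple O&M rule over the integrated data.
if (v := pump.latest("vibration_mm_s")) is not None and v > 4.5:
    print(f"Raise maintenance work order for {pump.asset_id} (vibration {v} mm/s)")
```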

    A Smart Products Lifecycle Management (sPLM) Framework - Modeling for Conceptualization, Interoperability, and Modularity

    Autonomy and intelligence have been built into many of today’s mechatronic products, taking advantage of low-cost sensors and advanced data analytics technologies. Designing product intelligence (enabled by analytics capabilities) is no longer a trivial or optional part of product development. This research aims to address the challenges raised by the new data-driven design paradigm for smart products development, in which the product itself and its smartness need to be carefully co-constructed. A smart product can be seen as a specific composition and configuration of its physical components, which form the body, and its analytics models, which implement the intelligence, both evolving along its lifecycle stages. Based on this view, the contribution of this research is to expand the “Product Lifecycle Management (PLM)” concept, traditionally applied to physical products, to data-based products. As a result, a Smart Products Lifecycle Management (sPLM) framework is conceptualized based on a high-dimensional Smart Product Hypercube (sPH) representation and decomposition. First, the sPLM addresses interoperability issues by developing a Smart Component data model to uniformly represent and compose physical component models created by engineers and analytics models created by data scientists. Second, the sPLM implements an NPD3 process model that incorporates a formal data analytics process into the new product development (NPD) process model, in order to support the transdisciplinary information flows and team interactions between engineers and data scientists. Third, the sPLM addresses the issues related to product definition, modular design, product configuration, and lifecycle management of analytics models by adapting the theoretical frameworks and methods for traditional product design and development. An sPLM proof-of-concept platform was implemented to validate the concepts and methodologies developed throughout the research work. The sPLM platform provides a shared data repository to manage the product-, process-, and configuration-related knowledge for smart products development. It also provides a collaborative environment to facilitate transdisciplinary collaboration between product engineers and data scientists.
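    A rough sketch of the Smart Component idea, one record that composes a physical component definition with the analytics models that implement its intelligence, tracked per lifecycle stage, might look like the following. All field names and references are illustrative, not the thesis's actual sPLM schema.

```python
# Sketch of a Smart Component: a physical component model (authored by
# engineers) composed with analytics models (authored by data scientists),
# tracked per lifecycle stage. Field names are illustrative.
from dataclasses import dataclass, field

@dataclass
class AnalyticsModel:
    name: str
    algorithm: str          # e.g. "gradient_boosting"
    training_data_ref: str  # pointer into the shared data repository

@dataclass
class SmartComponent:
    part_number: str
    cad_model_ref: str                    # physical definition
    lifecycle_stage: str = "design"       # design / production / use / retire
    analytics: list[AnalyticsModel] = field(default_factory=list)

motor = SmartComponent("M-1001", "plm://cad/motor_v3.step")
motor.analytics.append(
    AnalyticsModel("bearing_wear_predictor", "gradient_boosting", "plm://data/run42")
)
print(motor)
```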

    Semantically defined Analytics for Industrial Equipment Diagnostics

    In this age of digitalization, industries everywhere accumulate massive amounts of data, which have become the lifeblood of the global economy. These data may come from various heterogeneous equipment, components, sensors, systems, and applications in many varieties (diversity of sources), velocities (high rate of change), and volumes (sheer data size). Despite significant advances in the ability to collect, store, manage, and filter data, the real value lies in the analytics. Raw data are meaningless unless they are properly processed into actionable (business) insights. Those who know how to harness data effectively have a decisive competitive advantage: they raise performance by making faster and smarter decisions, improve short- and long-term strategic planning, offer more user-centric products and services, and foster innovation. Two distinct paradigms can be discerned in the practice of analytics: semantic-driven (deductive) and data-driven (inductive). The first emphasizes logic as a way of representing domain knowledge encoded in rules or ontologies, which are often carefully curated and maintained. However, these models are often highly complex and require intensive knowledge-processing capabilities. Data-driven analytics employ machine learning (ML) to learn a model directly from the data with minimal human intervention. However, these models are tuned to the training data and context, making them difficult to adapt. Industries that want to create value from data today must master these paradigms in combination. There is thus a great need in data analytics to seamlessly combine semantic-driven and data-driven processing techniques in an efficient and scalable architecture that allows actionable insights to be extracted from an extreme variety of data. In this thesis, we address these needs by providing:
    • A unified representation of domain-specific and analytical semantics, in the form of ontology models called the TechOnto Ontology Stack. It is a highly expressive, platform-independent formalism for capturing the conceptual semantics of industrial systems, such as technical system hierarchies and component partonomies, together with their analytical functional semantics.
    • A new ontology language, the Semantically defined Analytical Language (SAL), on top of the ontology model, which extends DatalogMTL (a Horn fragment of Metric Temporal Logic) with analytical functions as first-class citizens.
    • A method to generate semantic workflows using our SAL language. It helps in authoring, reusing, and maintaining complex analytical tasks and workflows in an abstract fashion.
    • A multi-layer architecture that fuses knowledge-driven and data-driven analytics into a federated and distributed solution.
    To our knowledge, the work in this thesis is one of the first to introduce and investigate the use of semantically defined analytics in an ontology-based data access setting for industrial analytical applications. We focus our work and evaluation on industrial data because of (i) the adoption of semantic technology by industry in general, and (ii) the common need in the literature and in practice to let domain expertise drive data analytics on semantically interoperable sources while still harnessing the power of analytics to enable real-time data insights.
Given the evaluation results of three use-case studies, our approach surpasses state-of-the-art approaches for most application scenarios.
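    The combination targeted here, a semantic layer that selects what to analyse and an analytical function applied to the selected data, can be illustrated roughly as below. The tiny ontology and the moving average merely stand in for TechOnto and SAL; the thesis's actual SAL syntax and ontology stack are not reproduced here.

```python
# Illustrative sketch: an ontology query selects which equipment signals to
# analyse, then an analytical function is applied to the corresponding data.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/plant#")
g = Graph()
g.add((EX.turbine1, RDF.type, EX.GasTurbine))
g.add((EX.turbine1, EX.hasSignal, Literal("exhaust_temperature")))

# Semantic step: which signals of which gas turbines should be analysed?
rows = g.query("""
    PREFIX ex: <http://example.org/plant#>
    SELECT ?equipment ?signal WHERE {
        ?equipment a ex:GasTurbine ; ex:hasSignal ?signal .
    }
""")

# Data-driven step: an analytical function applied to the selected signal.
def moving_average(values, window=3):
    return [sum(values[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(values))]

timeseries = {"exhaust_temperature": [510.0, 512.5, 515.0, 540.0, 541.0]}
for equipment, signal in rows:
    print(equipment, signal, moving_average(timeseries[str(signal)]))
```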

    Back-end reference architecture for smart water meter data gathering service

    Abstract. The Finnish waterworks industry is on the brink of digitalization. Many waterworks have started to convert their water meters to smart water meters, but there is as yet no suitable solution for gathering the IoT data from these meters. To answer these emerging needs, many pilots and workshops have been conducted. Those pilots have yielded some basic ground rules for the waterworks' use cases. In this study, those ground rules are gathered into a set of requirement categories. The categories are studied and analyzed in order to establish a reference architecture for IoT data-gathering systems suitable for waterworks. Using the requirements and the reference architecture, an information system, Dataservice, was implemented by Vesitieto Oy. The system gathers the IoT data and visualizes it for waterworks employees. The system was deployed in Microsoft’s cloud service, although other cloud vendors were examined as well. The system uses a two-part database design: the data required by the system itself, such as users and user groups, are held in an SQL database, while the IoT data are held in a NoSQL database. MongoDB was selected as the NoSQL database because it could be integrated with the chosen cloud provider.
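    The two-part storage design described above could be sketched as follows: a relational store for users and user groups, and a NoSQL collection for meter readings. The connection strings, field names, and schema are illustrative only and assume local SQLite and MongoDB instances rather than the cloud deployment described in the thesis.

```python
# Minimal sketch of the two-part storage idea: relational tables for users
# and user groups, a NoSQL collection for smart water meter readings.
import sqlite3
from datetime import datetime, timezone

from pymongo import MongoClient

# Relational side: system data such as users and user groups.
sql = sqlite3.connect(":memory:")
sql.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, user_group TEXT)")
sql.execute("INSERT INTO users (name, user_group) VALUES (?, ?)", ("operator1", "waterworks_a"))
sql.commit()

# NoSQL side: high-volume IoT readings from smart water meters.
readings = MongoClient("mongodb://localhost:27017")["dataservice"]["meter_readings"]
readings.insert_one({
    "meter_id": "WM-0042",
    "timestamp": datetime.now(timezone.utc),
    "cumulative_litres": 183_244.5,
})

# A query the visualisation layer might issue for one meter.
latest = readings.find({"meter_id": "WM-0042"}).sort("timestamp", -1).limit(1)
print(list(latest))
```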

    A Knowledge Graph Based Integration Approach for Industry 4.0

    The fourth industrial revolution, Industry 4.0 (I40), aims at creating smart factories employing, among others, Cyber-Physical Systems (CPS), the Internet of Things (IoT), and Artificial Intelligence (AI). Realizing smart factories according to the I40 vision requires intelligent human-to-machine and machine-to-machine communication. To achieve this communication, CPS along with their data need to be described, and interoperability conflicts arising from various representations need to be resolved. For establishing interoperability, industry communities have created standards and standardization frameworks. Standards describe the main properties of entities, systems, and processes, as well as interactions among them. Standardization frameworks classify, align, and integrate industrial standards according to their purposes and features. Despite being published by official international organizations, different standards may contain divergent definitions for similar entities. Further, when utilizing the same standard for the design of a CPS, different views can generate interoperability conflicts. Albeit expressive, standardization frameworks may represent divergent categorizations of the same standard. To support effective and efficient communication in smart factories, these interoperability conflicts need to be resolved. To achieve interoperability, data need to be semantically integrated and existing conflicts conciliated. This problem has been extensively studied in the literature, and the obtained results can be applied to general integration problems. However, current approaches fail to consider the specific interoperability conflicts that occur between entities in I40 scenarios. In this thesis, we tackle the problem of semantic data integration in I40 scenarios. A knowledge-graph-based approach allowing for the integration of entities in I40 while considering their semantics is presented. To achieve this integration, challenges have to be addressed on different conceptual levels: firstly, defining mappings between standards and standardization frameworks; secondly, representing knowledge of entities in I40 scenarios described by standards; thirdly, integrating perspectives of CPS design while solving semantic heterogeneity issues; and finally, determining real industry applications for the presented approach. We first devise a knowledge-driven approach allowing for the integration of standards and standardization frameworks into an Industry 4.0 knowledge graph (I40KG). The standards ontology is used for representing the main properties of standards and standardization frameworks, as well as the relationships among them. The I40KG permits the integration of standards and standardization frameworks while solving specific semantic heterogeneity conflicts in the domain. Further, we semantically describe standards in knowledge graphs. To this end, standards of core importance for I40 scenarios are considered, i.e., the Reference Architectural Model for I40 (RAMI4.0), AutomationML, and the Supply Chain Operation Reference Model (SCOR). In addition, different perspectives of entities describing CPS are integrated into the knowledge graphs. To evaluate the proposed methods, we rely on empirical evaluations as well as on the development of concrete use cases. The attained results provide evidence that a knowledge graph approach enables the effective data integration of entities in I40 scenarios while solving semantic interoperability conflicts, thus empowering communication in smart factories.
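    The kind of mapping such a knowledge graph can hold, a standard, the framework layer it is classified under, and a relation to another standard, might be sketched with rdflib as below. The sto: vocabulary terms used here are invented for illustration and are not the thesis's actual standards ontology.

```python
# Illustrative sketch of an Industry 4.0 knowledge graph fragment: a standard,
# its classification under a framework layer, and a relation between standards.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

STO = Namespace("http://example.org/sto#")
g = Graph()
g.bind("sto", STO)

g.add((STO.AutomationML, RDF.type, STO.Standard))
g.add((STO.AutomationML, RDFS.label, Literal("AutomationML")))
g.add((STO.RAMI40, RDF.type, STO.StandardizationFramework))

# Classify the standard under a framework layer and relate it to another standard.
g.add((STO.AutomationML, STO.classifiedAs, STO.RAMI40_CommunicationLayer))
g.add((STO.AutomationML, STO.relatedTo, STO.OPCUA))

# Query: which standards are classified under which layer?
for row in g.query("""
    PREFIX sto: <http://example.org/sto#>
    SELECT ?standard ?layer WHERE { ?standard sto:classifiedAs ?layer . }
"""):
    print(row)
```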

    SUTMS - Unified Threat Management Framework for Home Networks

    Home networks were initially designed for web browsing and non-business-critical applications. As infrastructure improved and broadband costs decreased, home internet usage expanded to e-commerce and business-critical applications. Today’s home computers host personally identifiable information and financial data and act as a bridge to corporate networks via remote access technologies like VPN. The expansion of remote work and the transition to cloud computing have broadened the attack surface for potential threats. Home networks have become an extension of critical networks and services, and attackers can gain access to corporate data by compromising devices attached to broadband routers. All these challenges underscore the importance of home-based Unified Threat Management (UTM) systems. There is a need for a unified threat management framework developed specifically for home and small networks to address emerging security challenges. In this research, the proposed Smart Unified Threat Management (SUTMS) framework serves as a comprehensive solution for implementing home network security, incorporating firewall, anti-bot, intrusion detection, and anomaly detection engines into a unified system. SUTMS provides 99.99% accuracy with a 56.83% improvement in memory usage. IPS stands out as the most resource-intensive UTM service; SUTMS successfully reduces the performance overhead of IDS by integrating it with the flow detection module. The artifact employs flow analysis to identify network anomalies and categorizes encrypted traffic according to its abnormalities. SUTMS can be scaled by introducing optional functions, i.e., routing and smart logging (utilizing Apriori algorithms). The research also tackles one of the limitations identified in SUTMS through the introduction of a second artifact called the Secure Centralized Management System (SCMS). SCMS is a lightweight asset management platform with built-in security intelligence that can seamlessly integrate with a cloud for real-time updates.
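    As a minimal sketch of the flow-analysis idea, traffic summarised into per-flow records and flagged when a flow's byte volume deviates strongly from a benign baseline, the following is illustrative only; the thresholds, field names, and z-score rule are assumptions, not SUTMS's actual detection logic.

```python
# Illustrative flow-based anomaly flagging: score new flows against a baseline
# of benign flow volumes and flag large positive deviations.
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class FlowRecord:
    src: str
    dst: str
    dst_port: int
    bytes_sent: int

def fit_baseline(history: list[int]) -> tuple[float, float]:
    return mean(history), pstdev(history)

def is_anomalous(flow: FlowRecord, mu: float, sigma: float, z_threshold: float = 3.0) -> bool:
    if sigma == 0:
        return flow.bytes_sent != mu
    return (flow.bytes_sent - mu) / sigma > z_threshold

baseline_bytes = [12_000, 9_500, 11_200, 10_800, 13_400]  # past benign flows
mu, sigma = fit_baseline(baseline_bytes)

new_flows = [
    FlowRecord("10.0.0.5", "93.184.216.34", 443, 11_900),
    FlowRecord("10.0.0.7", "203.0.113.9", 8081, 4_800_000),  # suspicious burst
]
for f in new_flows:
    if is_anomalous(f, mu, sigma):
        print(f"Anomalous flow {f.src} -> {f.dst}:{f.dst_port} ({f.bytes_sent} bytes)")
```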