
    Graduate Catalog of Studies, 2023-2024


    INTEGRATION OF DYNAMIC INFORMATION ON ENERGY PARAMETERS IN HBIM MODELS

    The conservation of cultural heritage can be affected by changes in temperature and humidity within architectural spaces, so the energy performance and interior microclimate of historic buildings must be incorporated into new maintenance and prevention studies. This need brings cultural heritage closer to digital technologies such as Historic Building Information Modelling (HBIM). In this work, a new interdisciplinary methodology between energy operators and BIM operators is developed, creating a framework for monitoring energy parameters through intelligent sensors that measure temperature and humidity within a fully interoperable, semantically enriched 3D model. The study focuses on solving the interoperability workflow between the sensors and the BIM platform, taking advantage of this new interconnectivity. The methodology was applied to the Church of the Sacred Heart of Jesus in Seville: from a survey with a georeferenced terrestrial laser scanner and topographic equipment, the building was modelled from the point cloud and the sensors were incorporated into the HBIM project. The workflow shows that microclimate data from inside churches can be managed directly in the environment of an HBIM-based model, and that the process supports a reverse flow of information.
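The sensor-to-model data flow described above can be sketched in a few lines. The following Python is a hypothetical illustration only; the class names, parameter names, and sensor/element identifiers are assumptions for the example, not the paper's actual HBIM tooling:

```python
from dataclasses import dataclass, field


@dataclass
class SensorReading:
    sensor_id: str
    temperature_c: float
    humidity_pct: float


@dataclass
class HbimElement:
    """A model element carrying dynamic energy parameters."""
    element_id: str
    parameters: dict = field(default_factory=dict)


def push_readings(model: dict, mapping: dict, readings: list) -> None:
    """Write each sensor reading into its mapped HBIM element's parameters."""
    for r in readings:
        element = model[mapping[r.sensor_id]]
        element.parameters["Temperature_C"] = r.temperature_c
        element.parameters["Humidity_%"] = r.humidity_pct


model = {"nave-wall-01": HbimElement("nave-wall-01")}
mapping = {"S1": "nave-wall-01"}  # sensor S1 monitors this wall (assumed IDs)
push_readings(model, mapping, [SensorReading("S1", 18.4, 62.0)])
print(model["nave-wall-01"].parameters)
# → {'Temperature_C': 18.4, 'Humidity_%': 62.0}
```

In a real HBIM platform the `parameters` dictionary would correspond to shared parameters on the modelled element, and the reverse flow would read those values back out to the monitoring side.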

    Towards A Practical High-Assurance Systems Programming Language

    Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties introduces yet another level of complexity, requiring considerable expertise in both systems programming and formal verification. Without appropriate tools that provide abstraction and automation, development can be extremely costly due to the sheer complexity of these systems and their nuances. Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code. To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proofs via a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which gives users a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems.
Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers into the verification process.
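The property-based testing approach mentioned above can be illustrated in miniature. The sketch below is Python, not Cogent, and is only an analogy for the framework: it checks that a "low-level" implementation agrees with a purely functional specification on randomly generated inputs, which is the refinement-style property such a framework would test.

```python
import random


def spec_sum(xs):
    """Functional specification: a simple fold over the list."""
    return 0 if not xs else xs[0] + spec_sum(xs[1:])


def impl_sum(xs):
    """'Low-level' style implementation: explicit loop and accumulator."""
    acc = 0
    i = 0
    while i < len(xs):
        acc += xs[i]
        i += 1
    return acc


def check_refinement(trials=200):
    """Property: the implementation agrees with the spec on random inputs."""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 20))]
        assert impl_sum(xs) == spec_sum(xs)
    return True


print(check_refinement())  # → True
```

A failing run would point the developer at a concrete counterexample long before attempting a formal proof, which is the "progressive approach" the abstract refers to.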

    A Machine Learning based Empirical Evaluation of Cyber Threat Actors High Level Attack Patterns over Low level Attack Patterns in Attributing Attacks

    Cyber threat attribution is the process of identifying the actor behind an attack incident in cyberspace. Accurate and timely threat attribution plays an important role in deterring future attacks by enabling appropriate and timely defense mechanisms. Manual analysis of attack patterns gathered by honeypot deployments, intrusion detection systems, firewalls, and trace-back procedures is still the preferred method of security analysts for cyber threat attribution. Such attack patterns are low-level Indicators of Compromise (IOC). They represent Tactics, Techniques, and Procedures (TTPs) and software tools used by adversaries in their campaigns. Adversaries rarely re-use them, and they can also be manipulated, resulting in false and unfair attribution. To empirically evaluate and compare the effectiveness of both kinds of IOC, two problems need to be addressed. First, recent research has discussed the ineffectiveness of low-level IOC for cyber threat attribution only intuitively; an empirical evaluation of their effectiveness on a real-world dataset is missing. Second, the available dataset for high-level IOC has a single instance for each predictive class label, so it cannot be used directly for training machine learning models. To address these problems, in this research work we empirically evaluate the effectiveness of low-level IOC on a real-world dataset built specifically for comparative analysis with high-level IOC. The experimental results show that models trained on high-level IOC attribute cyberattacks with an accuracy of 95%, compared to 40% for models trained on low-level IOC.
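As a toy illustration of attribution from high-level IOC, the sketch below matches an observed set of TTPs against per-actor profiles using Jaccard similarity. The actor names and technique IDs are invented for the example, and the paper's actual approach uses trained ML classifiers rather than this similarity rule:

```python
def jaccard(a, b):
    """Jaccard similarity of two sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0


# Hypothetical per-actor TTP profiles (MITRE ATT&CK-style technique IDs).
ACTOR_PROFILES = {
    "APT-A": {"T1566", "T1059", "T1027"},
    "APT-B": {"T1190", "T1505", "T1071"},
}


def attribute(observed_ttps):
    """Return the actor whose TTP profile best matches the observation."""
    return max(ACTOR_PROFILES,
               key=lambda actor: jaccard(observed_ttps, ACTOR_PROFILES[actor]))


print(attribute({"T1566", "T1059"}))  # → APT-A
```

Because TTPs characterize behaviour rather than easily-changed artifacts such as IP addresses or file hashes, even this crude matcher hints at why high-level IOC generalize better across campaigns.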

    Knowledge Graph Building Blocks: An easy-to-use Framework for developing FAIREr Knowledge Graphs

    Knowledge graphs and ontologies provide promising technical solutions for implementing the FAIR Principles for Findable, Accessible, Interoperable, and Reusable data and metadata. However, they also come with their own challenges. Nine such challenges are discussed and associated with the criterion of cognitive interoperability and the specific FAIREr principles (FAIR + Explorability raised) that they fail to meet. We introduce an easy-to-use, open-source knowledge graph framework based on knowledge graph building blocks (KGBBs). KGBBs are small information modules for knowledge processing, each based on a specific type of semantic unit. By interrelating several KGBBs, one can specify a KGBB-driven FAIREr knowledge graph. Besides implementing semantic units, the KGBB Framework clearly distinguishes and decouples an internal in-memory data model from data storage, data display, and data access/export models. We argue that this decoupling is essential for solving many problems of knowledge management systems. We discuss the architecture of the KGBB Framework as we envision it, comprising (i) an openly accessible KGBB-Repository for different types of KGBBs; (ii) a KGBB-Engine for managing and operating FAIREr knowledge graphs (including automatic provenance tracking, editing changelog, and versioning of semantic units); (iii) a repository for KGBB-Functions; and (iv) a low-code KGBB-Editor with which domain experts can create new KGBBs and specify their own FAIREr knowledge graph without having to think about semantic modelling. We conclude by discussing the nine challenges and how the KGBB Framework provides solutions for the issues they raise. While most of what we discuss here is entirely conceptual, we can point to two prototypes that demonstrate the in-principle feasibility of using semantic units and KGBBs to manage and structure knowledge graphs.
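The idea of composing a graph from small building blocks while decoupling the in-memory model from export formats might be sketched as follows. The class and method names are invented for illustration and are not the KGBB Framework's actual API:

```python
class KGBB:
    """A building block: one semantic-unit type that produces triples."""

    def __init__(self, unit_type, template):
        self.unit_type = unit_type
        self.template = template  # (subject_key, predicate, object_key)

    def triples(self, record):
        s_key, predicate, o_key = self.template
        return [(record[s_key], predicate, record[o_key])]


class KnowledgeGraph:
    """In-memory model, decoupled from any storage or display format."""

    def __init__(self):
        self.triples = []

    def add(self, kgbb, record):
        self.triples.extend(kgbb.triples(record))

    def export_ntriples(self):
        # One of several possible export models over the same in-memory data.
        return "\n".join(f"<{s}> <{p}> <{o}> ." for s, p, o in self.triples)


measurement = KGBB("measurement", ("subject", "hasValue", "value"))
kg = KnowledgeGraph()
kg.add(measurement, {"subject": "sample-1", "value": "42mg"})
print(kg.export_ntriples())  # → <sample-1> <hasValue> <42mg> .
```

A domain expert would only pick and connect block types; serialization, storage, and display would each be separate models plugged onto the same in-memory graph, which is the decoupling the abstract argues for.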

    CSM-H-R: An Automatic Context Reasoning Framework for Interoperable Intelligent Systems and Privacy Protection

    Automation of High-Level Context (HLC) reasoning for intelligent systems at scale is imperative due to the unceasing accumulation of contextual data in the IoT era, the trend toward fusing data from multiple sources, and the intrinsic complexity and dynamism of context-based decision-making. To mitigate this issue, we propose CSM-H-R, an automatic context reasoning framework that programmatically combines ontologies and states at runtime and in the model-storage phase in order to recognize meaningful HLC; the resulting data representation can be applied to different reasoning techniques. Case studies are developed around an intelligent elevator system in a smart campus setting. An implementation of the framework, a CSM Engine, and experiments translating HLC reasoning into vector and matrix computing address the dynamic aspects of context and demonstrate the potential of advanced mathematical and probabilistic models for achieving the next level of automation in integrating intelligent systems. Meanwhile, privacy protection is supported by anonymization through label embedding and by reducing information correlation. The code of this study is available at: https://github.com/songhui01/CSM-H-R.
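The translation of HLC reasoning into vector and matrix computing can be sketched with a toy state model. The context states and transition values below are assumptions for illustration, not data from the paper's elevator case study:

```python
STATES = ["idle", "moving", "crowded"]  # hypothetical elevator contexts


def matvec(m, v):
    """Plain-Python matrix-vector product."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]


# Column j gives the next-state distribution from state j (assumed values;
# each column sums to 1).
TRANSITION = [
    [0.7, 0.2, 0.1],
    [0.2, 0.6, 0.3],
    [0.1, 0.2, 0.6],
]


def next_context(current):
    """One-hot encode the current HLC, then pick the most likely successor."""
    v = [1.0 if s == current else 0.0 for s in STATES]
    probs = matvec(TRANSITION, v)
    return STATES[probs.index(max(probs))]


print(next_context("idle"))  # → idle (most likely successor)
```

Note that the anonymization point in the abstract fits the same encoding: once contexts are reduced to opaque label indices and vectors, the reasoning layer never needs the semantically identifying names.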

    Integration of MLOps with IoT edge

    Abstract. Edge computing and machine learning have become increasingly vital in today’s digital landscape. Edge computing brings computational power closer to the data source, enabling reduced latency and bandwidth usage, increased privacy, and real-time decision-making. Running machine learning models on edge devices further enhances these advantages by reducing reliance on the cloud. This empowers industries such as transport, healthcare, and manufacturing to harness the full potential of machine learning. MLOps, or Machine Learning Operations, plays a major role in streamlining the deployment, monitoring, and management of machine learning models in production. With MLOps, organisations can achieve faster model iteration, reduced deployment time, improved collaboration among developers, optimised performance, and ultimately meaningful business outcomes. Integrating MLOps with edge devices poses unique challenges; overcoming them requires careful planning, customised deployment strategies, and efficient model optimization techniques. This thesis project introduces a set of tools that enable the integration of MLOps practices with edge devices. The solution consists of two sets of tools: one for setting up infrastructure within edge devices so they can receive, monitor, and run inference on machine learning models, and another for MLOps pipelines to package models to be compatible with the inference and monitoring components of the respective edge devices. The platform was evaluated using a public dataset for predicting the breakdown of air pressure systems in trucks, an ideal use case for running ML inference on the edge, by connecting MLOps pipelines with edge devices. A simulation was created from this data to control the volume of data flowing into the edge devices. Thereafter, the performance of the platform was tested against the scenario created by the simulation script.
Response time and CPU usage of the different components were the metrics measured. Additionally, the platform was evaluated against a set of commercial and open-source tools and services that serve similar purposes. The overall performance of this solution matches that of existing tools and services, while giving end users who set up Edge-MLOps infrastructure complete freedom to build their system without relying on third-party licensed software.
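A minimal sketch of the edge-side receive/infer/monitor component described above might look as follows. The package format, metric handling, and model are assumptions for illustration, not the thesis's actual implementation:

```python
import time


class EdgeInferenceService:
    """Edge-side component: receive a model package, serve predictions,
    and record a monitoring metric (response time per request)."""

    def __init__(self):
        self.model = None
        self.latencies_ms = []

    def deploy(self, model_package):
        # A 'package' here is just a predict callable plus metadata
        # (assumed format; real pipelines ship serialized model artifacts).
        self.model = model_package["predict"]

    def infer(self, features):
        start = time.perf_counter()
        result = self.model(features)
        self.latencies_ms.append((time.perf_counter() - start) * 1000.0)
        return result


# Hypothetical packaged model: flag air-pressure-system breakdown risk.
package = {
    "name": "aps-failure-v1",
    "predict": lambda f: "breakdown-risk" if f["pressure_drop"] > 0.8 else "ok",
}

service = EdgeInferenceService()
service.deploy(package)
print(service.infer({"pressure_drop": 0.95}))  # → breakdown-risk
```

In the thesis's setup the MLOps pipeline side would produce `package` in a device-compatible format, and the recorded latencies would feed the monitoring component that the evaluation measured.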

    Utilizing artificial intelligence in perioperative patient flow: systematic literature review

    Abstract. The purpose of this thesis was to map the existing landscape of artificial intelligence (AI) applications used in secondary healthcare, with a focus on perioperative care. The goal was to find out what systems have been developed and how capable they are of controlling perioperative patient flow. The review was guided by the following research question: how is AI currently utilized in patient flow management in the context of perioperative care? This systematic literature review examined the current evidence on the use of AI in perioperative patient flow. A comprehensive search was conducted in four databases, resulting in 33 articles meeting the inclusion criteria. The findings demonstrated that AI technologies, such as machine learning (ML) algorithms and predictive analytics tools, have shown somewhat promising outcomes in optimizing perioperative patient flow. Specifically, AI systems have proven effective in predicting surgical case durations, assessing risks, planning treatments, supporting diagnosis, improving bed utilization, reducing cancellations and delays, and enhancing communication and collaboration among healthcare providers. However, several challenges were identified, including the need for accurate and reliable data sources, ethical considerations, and the potential for biased algorithms. Further research is needed to validate and optimize the application of AI in perioperative patient flow. The contribution of this thesis is a summary of the current state of AI applications in perioperative patient flow. This systematic literature review provides information about the features of perioperative patient flow and the clinical tasks of AI applications previously identified.
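As a concrete (and deliberately naive) example of one task the review highlights, surgical case-duration prediction can be baselined by the historical mean per procedure type; the procedures and durations below are invented for illustration:

```python
from statistics import mean

# Hypothetical historical case durations in minutes, keyed by procedure type.
HISTORY = {
    "appendectomy": [42, 55, 48, 51],
    "knee-replacement": [95, 110, 102],
}


def predict_duration(procedure, default=60.0):
    """Baseline predictor: mean historical duration for the procedure type.
    An ML model would replace this with learned patient/team/case features."""
    durations = HISTORY.get(procedure)
    return mean(durations) if durations else default


print(predict_duration("appendectomy"))
```

The AI systems surveyed in the review improve on exactly this kind of baseline by conditioning on richer features, which is what makes the accuracy gains clinically useful for scheduling.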

    Unified System on Chip RESTAPI Service (USOCRS)

    Abstract. This thesis investigates the development of a Unified System on Chip RESTAPI Service (USOCRS) to enhance the efficiency and effectiveness of SoC verification reporting. The research aims to overcome the challenges associated with the transfer, utilization, and interpretation of SoC verification reports by creating a unified platform that integrates various tools and technologies. The research methodology follows a design science approach. A thorough literature review was conducted to explore existing approaches and technologies related to SoC verification reporting, automation, data visualization, and API development. The review revealed gaps in the current state of the field, providing a basis for further investigation. Using the insights gained from the literature review, a system design and implementation plan were developed. This plan makes use of technologies such as FastAPI, SQL and NoSQL databases, Azure Active Directory for authentication, and cloud services. The Verification Toolbox was employed to validate SoC reports against the organization’s standards. The system went through manual testing, and user satisfaction was evaluated to ensure its functionality and usability. The results of this study demonstrate the successful design and implementation of the USOCRS, offering SoC engineers a unified and secure platform for uploading, validating, storing, and retrieving verification reports. The USOCRS facilitates seamless communication between users and the API, granting easy access to vital information including successes, failures, and test coverage derived from submitted SoC verification reports. By automating and standardizing the SoC verification reporting process, the USOCRS eliminates manual and repetitive tasks usually done by developers, thereby enhancing productivity and establishing a robust and reliable framework for report storage and retrieval.
Through the integration of diverse tools and technologies, the USOCRS presents a comprehensive solution that adheres to the required specifications of the SoC schema used within the organization. Furthermore, the USOCRS significantly improves the efficiency and effectiveness of SoC verification reporting: it streamlines the submission process, reduces latency through optimized data storage, and enables meaningful extraction and analysis of report data.
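The upload → validate → store → retrieve flow can be sketched in plain Python. The schema fields and class names below are assumptions for illustration; the actual USOCRS exposes this flow through a FastAPI service backed by databases rather than an in-memory store:

```python
# Assumed required fields of a verification report (not the real SoC schema).
REPORT_SCHEMA = {"project", "testbench", "passed", "failed", "coverage_pct"}


class ReportStore:
    """Sketch of the upload → validate → store → retrieve flow."""

    def __init__(self):
        self._reports = {}
        self._next_id = 1

    def validate(self, report):
        """Reject reports missing required schema fields."""
        missing = REPORT_SCHEMA - report.keys()
        if missing:
            raise ValueError(f"report missing fields: {sorted(missing)}")

    def upload(self, report):
        self.validate(report)
        report_id = self._next_id
        self._reports[report_id] = report
        self._next_id += 1
        return report_id

    def retrieve(self, report_id):
        return self._reports[report_id]


store = ReportStore()
rid = store.upload({"project": "soc-x", "testbench": "tb_top",
                    "passed": 120, "failed": 3, "coverage_pct": 91.5})
print(store.retrieve(rid)["coverage_pct"])  # → 91.5
```

Schema validation at upload time is what lets the service standardize reporting across teams: a malformed report is rejected immediately instead of surfacing later as unusable stored data.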