Executable Models and Instance Tracking for Decentralized Applications on Blockchains and Cloud Platforms -- Metamodel and Implementation
Decentralized applications rely on non-centralized technical infrastructures
and coordination principles. Without trusted third parties, their execution is
not controlled by entities exercising centralized coordination but is instead
realized through technologies supporting distribution such as blockchains and
serverless computing. Executing decentralized applications with these
technologies, however, is challenging due to the limited transparency and
insight into the execution, especially when involving centralized cloud
platforms. This paper extends an approach for execution and instance tracking
on blockchains and cloud platforms permitting distributed parties to observe
the instances and states of executable models. The approach is extended with
(1.) a metamodel describing the concepts for instance tracking on cloud
platforms independent of concrete models or implementation, (2.) a
multidimensional data model realizing the concepts accordingly, permitting the
verifiable storage, tracking, and analysis of execution states for distributed
parties, and (3.) an implementation on the Ethereum blockchain and Amazon Web
Services (AWS) using state machine models. Towards supporting decentralized
applications with high scalability and distribution requirements, the approach
establishes a consistent view of instances for distributed parties to track and
analyze the execution along multiple dimensions such as specific clients and
execution engines.

Comment: This is an unpublished preprint; both versions archived on arXiv.org have not been published. Although initially intended for publication, the preprint has undergone further improvements and has been utilized as input for new publications (see also: https://www.unifr.ch/inf/digits/en/group/team/haerer.html).
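The instance-tracking idea described above can be illustrated with a minimal sketch: each state transition of a state machine instance is recorded as a hash chained to the previous one, so distributed parties can verify the execution trace. All names here are hypothetical, and anchoring the digests on Ethereum or AWS is only hinted at in a comment; this is not the paper's actual implementation.

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class Instance:
    """One tracked instance of an executable state machine model."""
    model_id: str
    instance_id: str
    state: str = "initial"
    trace: list = field(default_factory=list)  # chain of transition digests

    def transition(self, event: str, next_state: str) -> str:
        # Each record links to the previous digest, forming a hash chain
        # that distributed parties can independently verify.
        record = {
            "instance": self.instance_id,
            "event": event,
            "from": self.state,
            "to": next_state,
            "prev": self.trace[-1] if self.trace else None,
        }
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.trace.append(digest)  # digest could be anchored on-chain
        self.state = next_state
        return digest

inst = Instance("order-process", "inst-1")
h1 = inst.transition("submit", "submitted")
h2 = inst.transition("approve", "approved")
```

Because each digest includes its predecessor, tampering with any recorded transition invalidates every later entry in the trace.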
Knowledge Graph Building Blocks: An easy-to-use Framework for developing FAIREr Knowledge Graphs
Knowledge graphs and ontologies provide promising technical solutions for
implementing the FAIR Principles for Findable, Accessible, Interoperable, and
Reusable data and metadata. However, they also come with their own challenges.
Nine such challenges are discussed and associated with the criterion of
cognitive interoperability and specific FAIREr principles (FAIR + Explorability
raised) that they fail to meet. We introduce an easy-to-use, open source
knowledge graph framework that is based on knowledge graph building blocks
(KGBBs). KGBBs are small information modules for knowledge-processing, each
based on a specific type of semantic unit. By interrelating several KGBBs, one
can specify a KGBB-driven FAIREr knowledge graph. Besides implementing semantic
units, the KGBB Framework clearly distinguishes and decouples an internal
in-memory data model from data storage, data display, and data access/export
models. We argue that this decoupling is essential for solving many problems of
knowledge management systems. We discuss the architecture of the KGBB Framework
as we envision it, comprising (i) an openly accessible KGBB-Repository for
different types of KGBBs; (ii) a KGBB-Engine for managing and operating FAIREr
knowledge graphs (including automatic provenance tracking, editing changelog,
and versioning of semantic units); (iii) a repository for KGBB-Functions; and
(iv) a low-code KGBB-Editor with which domain experts can create new KGBBs and
specify their own FAIREr knowledge graph without having to think about semantic
modelling. We conclude by discussing the nine challenges and how the KGBB
Framework provides solutions for the issues they raise. While most of what we
discuss here is entirely conceptual, we can point to two prototypes that
demonstrate the feasibility in principle of using semantic units and KGBBs to
manage and structure knowledge graphs.
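The notion of a building block emitting statements for one semantic unit could be sketched as follows; the class, property names, and example terms are hypothetical and do not reflect the KGBB Framework's actual API.

```python
class KGBB:
    """Illustrative knowledge graph building block: a small module that
    emits triples for one specific type of semantic unit."""

    def __init__(self, unit_type: str):
        self.unit_type = unit_type

    def emit(self, subject: str, **props):
        # Type the subject with this block's semantic unit, then attach
        # whatever properties the caller supplies.
        triples = [(subject, "rdf:type", self.unit_type)]
        triples += [(subject, pred, obj) for pred, obj in props.items()]
        return triples

# Interrelating blocks of this kind would specify a larger knowledge graph.
measurement = KGBB("ex:MeasurementUnit")
triples = measurement.emit("ex:m1", value="5.2", unit="ex:gram")
```

Decoupling such an in-memory representation from storage and display, as the abstract argues, means the same triples can be serialised or rendered by separate, swappable components.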
Endogenous measures for contextualising large-scale social phenomena: a corpus-based method for mediated public discourse
This work presents an interdisciplinary methodology for developing endogenous measures of group membership through analysis of pervasive linguistic patterns in public discourse. Focusing on political discourse, this work critiques the conventional approach to the study of political participation, which is premised on decontextualised, exogenous measures to characterise groups. Considering the theoretical and empirical weaknesses of decontextualised approaches to large-scale social phenomena, this work suggests that contextualisation using endogenous measures might provide a complementary perspective to mitigate such weaknesses.
This work develops a sociomaterial perspective on political participation in mediated discourse as affiliatory action performed through language. While the affiliatory function of language is often performed consciously (such as statements of identity), this work is concerned with unconscious features (such as patterns in lexis and grammar). This work argues that pervasive patterns in such features that emerge through socialisation are resistant to change and manipulation, and thus might serve as endogenous measures of sociopolitical contexts, and thus of groups.
In terms of method, the work takes a corpus-based approach to the analysis of data from the Twitter messaging service, whereby patterns in users’ speech are examined statistically in order to trace potential community membership. The method is applied in the US state of Michigan during the second half of 2018, 6 November being the date of the midterm (i.e. non-Presidential) elections in the United States. The corpus is assembled from the original posts of 5,889 users, who are nominally geolocalised to 417 municipalities. These users are clustered according to pervasive language features. Comparing the linguistic clusters according to the municipalities they represent reveals regular sociodemographic differentials across clusters. This is understood as an indication of social structure, suggesting that endogenous measures derived from pervasive patterns in language may indeed offer a complementary, contextualised perspective on large-scale social phenomena.
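The core statistical move, representing each user by the relative frequency of pervasive function-word features and comparing users by similarity, can be sketched minimally as below; the feature list and texts are illustrative, not the study's actual feature set.

```python
import math
from collections import Counter

# A tiny, hypothetical set of pervasive (function-word) features.
FEATURES = ["the", "of", "and", "to", "a", "in", "that", "it"]

def feature_vector(text: str):
    """Relative frequency of each feature within the feature set."""
    counts = Counter(text.lower().split())
    total = sum(counts[f] for f in FEATURES) or 1
    return [counts[f] / total for f in FEATURES]

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

u1 = feature_vector("the vote in the state and the county")
u2 = feature_vector("the result of the vote in the district")
sim = cosine(u1, u2)
```

Clustering users by such similarity scores is what would then be compared against sociodemographic data for the municipalities they represent.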
Integration of heterogeneous data sources and automated reasoning in healthcare and domotic IoT systems
In recent years, IoT technology has radically transformed many crucial industrial and service sectors such as healthcare. The multi-faceted heterogeneity of the devices and the collected information provides important opportunities to develop innovative systems and services. However, the ubiquitous presence of data silos and the poor semantic interoperability in the IoT landscape constitute a significant obstacle to the pursuit of this goal. Moreover, deriving actionable knowledge from the collected data requires IoT information sources to be analysed using appropriate artificial intelligence techniques such as automated reasoning. In this thesis work, Semantic Web technologies have been investigated as an approach to address both the data integration and the reasoning aspects of modern IoT systems. In particular, the contributions presented in this thesis are the following: (1) the IoT Fitness Ontology, an OWL ontology developed to overcome the issue of data silos and enable semantic interoperability in the IoT fitness domain; (2) a Linked Open Data web portal for collecting and sharing IoT health datasets with the research community; (3) a novel methodology for embedding knowledge in rule-defined IoT smart home scenarios; and (4) a knowledge-based IoT home automation system that supports a seamless integration of heterogeneous devices and data sources.
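The rule-defined smart home idea can be illustrated with a minimal sketch: readings from heterogeneous devices are normalised into triples, and a simple rule derives actions over them. The fact schema, rule, and device names are all hypothetical, standing in for the OWL/rule machinery a real system would use.

```python
# Heterogeneous device data normalised into subject-predicate-object triples.
facts = {
    ("sensor1", "type", "TemperatureSensor"),
    ("sensor1", "locatedIn", "livingRoom"),
    ("sensor1", "reading", 29),
    ("fan1", "type", "Fan"),
    ("fan1", "locatedIn", "livingRoom"),
}

def infer_actions(facts, threshold=26):
    """Rule sketch: if a room's temperature exceeds the threshold,
    turn on every fan located in that room."""
    actions = []
    readings = {s: o for s, p, o in facts if p == "reading"}
    rooms = {s: o for s, p, o in facts if p == "locatedIn"}
    for sensor, value in readings.items():
        if value > threshold:
            room = rooms.get(sensor)
            for subj, pred, obj in facts:
                if pred == "type" and obj == "Fan" and rooms.get(subj) == room:
                    actions.append(("turnOn", subj))
    return actions

actions = infer_actions(facts)
```

Because devices of any vendor can be mapped into the same triple vocabulary, the rule applies uniformly across otherwise incompatible data sources, which is the interoperability point the thesis makes.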
Modelling, Monitoring, Control and Optimization for Complex Industrial Processes
This reprint includes 22 research papers and an editorial, collected from the Special Issue "Modelling, Monitoring, Control and Optimization for Complex Industrial Processes", highlighting recent research advances and emerging research directions in complex industrial processes. This reprint aims to promote the research field and benefit readers from both academic communities and industrial sectors.
(De)constructing Machines as Critical Technical Practice
This paper discusses the role of technology under the framework of Critical Technical Practice specifically in the form of constructing artefacts and deconstructing tools in order to produce what Philip Agre would describe as ‘reflexive work of critique’ (Agre, 1997:155). By presenting the activities and methods used in the teaching and shaping of undergraduate courses, this paper aims to show how technical objects, such as data, datasets, application programming interfaces and machine learning models, can be considered as discursive subjects, demonstrating pedagogical understanding across fields. The courses operate in the humanities tradition and take critical technical practice as a didactic approach, insofar as software and data are understood and manipulated on an instrumental level, while encouraging critical engagement and embodied reflection that bridge the technical and social/cultural domains. Within this pedagogical approach, critical is not only understood as a paradigm of rationality or quantitative, data-driven argumentation, but as adopting a critical position – that is, to research and reflect on the social structures and cultural phenomena entangled with digital objects, bodies, tools, methods and software production. By embracing work-in-progress and reflexive exploration, we aim to extend the notion of critical technical practice by unfolding how (de)constructing machines can be achieved beyond thinking of technology as neutral instrumentalisation. The challenge is how to find a balance, not only as researchers but as educators, unfolding aspects of both formality and functionality as well as questioning and understanding technology at a discursive and critical level. We argue that learning technical practice in an educational setting is not an end, but rather a means to question existing technological structures and create further changes in socio-technical systems.
BIM-GPT: a Prompt-Based Virtual Assistant Framework for BIM Information Retrieval
Efficient information retrieval (IR) from building information models (BIMs)
poses significant challenges due to the necessity for deep BIM knowledge or
extensive engineering efforts for automation. We introduce BIM-GPT, a
prompt-based virtual assistant (VA) framework integrating BIM and generative
pre-trained transformer (GPT) technologies to support natural language
(NL)-based IR. A prompt
manager and dynamic template generate prompts for GPT models, enabling
interpretation of NL queries, summarization of retrieved information, and
answering BIM-related questions. In tests on a BIM IR dataset, our approach
achieved 83.5% and 99.5% accuracy rates for classifying NL queries with no data
and 2% data incorporated in prompts, respectively. Additionally, we validated
the functionality of BIM-GPT through a VA prototype for a hospital building.
This research contributes to the development of effective and versatile VAs for
BIM IR in the construction industry, significantly enhancing BIM accessibility
and reducing engineering efforts and training data requirements for processing
NL queries.

Comment: 35 pages, 15 figures
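The prompt-manager idea the abstract describes, a dynamic template that assembles a prompt for a GPT model from an NL query plus a small share of labelled examples, could be sketched as below. The template wording, categories, and example queries are hypothetical, not BIM-GPT's actual prompts.

```python
# Dynamic prompt template: few-shot examples (if any) are spliced in
# before the query to be classified.
TEMPLATE = (
    "You classify building-information queries.\n"
    "{examples}"
    "Query: {query}\n"
    "Category:"
)

def build_prompt(query: str, examples=()):
    """Assemble a classification prompt from a query and (query, category)
    example pairs; with examples=() this is the zero-shot ('no data') case."""
    shots = "".join(f"Query: {q}\nCategory: {c}\n" for q, c in examples)
    return TEMPLATE.format(examples=shots, query=query)

prompt = build_prompt(
    "What is the fire rating of the walls on level 3?",
    examples=[("List all doors on level 2", "element_listing")],
)
```

The abstract's comparison of 83.5% versus 99.5% accuracy corresponds to varying how much labelled data (none versus 2%) is incorporated into prompts of this kind.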