
    Knowledge Graph Building Blocks: An easy-to-use Framework for developing FAIREr Knowledge Graphs

    Knowledge graphs and ontologies provide promising technical solutions for implementing the FAIR Principles for Findable, Accessible, Interoperable, and Reusable data and metadata. However, they also come with their own challenges. Nine such challenges are discussed and associated with the criterion of cognitive interoperability and the specific FAIREr principles (FAIR + Explorability raised) that they fail to meet. We introduce an easy-to-use, open-source knowledge graph framework that is based on knowledge graph building blocks (KGBBs). KGBBs are small information modules for knowledge processing, each based on a specific type of semantic unit. By interrelating several KGBBs, one can specify a KGBB-driven FAIREr knowledge graph. Besides implementing semantic units, the KGBB Framework clearly distinguishes and decouples an internal in-memory data model from data storage, data display, and data access/export models. We argue that this decoupling is essential for solving many problems of knowledge management systems. We discuss the architecture of the KGBB Framework as we envision it, comprising (i) an openly accessible KGBB-Repository for different types of KGBBs; (ii) a KGBB-Engine for managing and operating FAIREr knowledge graphs (including automatic provenance tracking, editing changelog, and versioning of semantic units); (iii) a repository for KGBB-Functions; and (iv) a low-code KGBB-Editor with which domain experts can create new KGBBs and specify their own FAIREr knowledge graph without having to think about semantic modelling. We conclude by discussing the nine challenges and how the KGBB Framework provides solutions for the issues they raise. While most of what we discuss here is entirely conceptual, we can point to two prototypes that demonstrate the feasibility in principle of using semantic units and KGBBs to manage and structure knowledge graphs.
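
    For illustration, the sketch below (hypothetical names and structure, not the framework's actual API) shows how a KGBB-style module might bundle one semantic-unit type with decoupled storage and display mappings:

```python
# Hypothetical sketch of a knowledge graph building block (KGBB):
# names and structure are illustrative, not the framework's actual API.
from dataclasses import dataclass


@dataclass
class SemanticUnit:
    """A small, typed statement unit (e.g. a weight measurement)."""
    unit_type: str           # e.g. "has_weight_measurement"
    subject: str             # resource identifier
    values: dict             # slot name -> literal or identifier


@dataclass
class KGBB:
    """Bundles one semantic-unit type with decoupled model mappings."""
    name: str
    unit_type: str
    storage_template: str    # e.g. a graph-database pattern
    display_template: str    # how the unit is shown to users

    def to_storage(self, unit: SemanticUnit) -> str:
        # Map the in-memory unit onto the storage model.
        return self.storage_template.format(subject=unit.subject, **unit.values)

    def to_display(self, unit: SemanticUnit) -> str:
        # Map the same unit onto a human-readable display model.
        return self.display_template.format(subject=unit.subject, **unit.values)


weight_kgbb = KGBB(
    name="WeightMeasurement",
    unit_type="has_weight_measurement",
    storage_template="({subject})-[:HAS_WEIGHT]->(:Value {{num: {value}, unit: '{unit}'}})",
    display_template="{subject} weighs {value} {unit}",
)

unit = SemanticUnit("has_weight_measurement", "specimen_42", {"value": 1.3, "unit": "kg"})
print(weight_kgbb.to_storage(unit))
print(weight_kgbb.to_display(unit))
```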

    Colour technologies for content production and distribution of broadcast content

    The requirement for accurate colour reproduction has long been a priority driving the development of new colour imaging systems that maximise human perceptual plausibility. This thesis explores machine learning algorithms for colour processing to assist both content production and distribution. First, this research studies colourisation technologies with practical use cases in the restoration and processing of archived content. The research targets practical, deployable solutions, developing a cost-effective pipeline which integrates the activity of the producer into the processing workflow. In particular, a fully automatic image colourisation paradigm using conditional GANs is proposed to improve the content generalisation and colourfulness of existing baselines. Moreover, a more conservative solution is considered by providing references to guide the system towards more accurate colour predictions. A fast end-to-end architecture is proposed to improve existing exemplar-based image colourisation methods while decreasing complexity and runtime. Finally, the proposed image-based methods are integrated into a video colourisation pipeline. A general framework is proposed to reduce the generation of temporal flickering or the propagation of errors when such methods are applied frame by frame. The proposed model is jointly trained to stabilise the input video and to cluster its frames with the aim of learning scene-specific modes. Second, this research explores colour processing technologies for content distribution with the aim of effectively delivering the processed content to a broad audience. In particular, video compression is tackled by introducing a novel methodology for chroma intra prediction based on attention models. Although the proposed architecture helped to gain control over the reference samples and better understand the prediction process, the complexity of the underlying neural network significantly increased the encoding and decoding time. Therefore, aiming at efficient deployment within the latest video coding standards, this work also focused on simplifying the proposed architecture to obtain a more compact and explainable model.
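
    As a hedged illustration of the conditional-GAN colourisation idea described above, the following sketch uses toy networks and an assumed loss weighting rather than the thesis's actual architecture; it predicts ab chroma channels from the L luminance channel:

```python
# Minimal sketch of conditional-GAN colourisation (illustrative networks
# and loss weights, not the thesis's actual models).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Predicts ab chroma channels from the L luminance channel."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1), nn.Tanh(),
        )
    def forward(self, l):
        return self.net(l)

class Discriminator(nn.Module):
    """Scores Lab images as real or generated, conditioned on L."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 4, stride=2, padding=1),
        )
    def forward(self, lab):
        return self.net(lab)

gen, disc = Generator(), Discriminator()
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

l = torch.rand(4, 1, 64, 64)        # luminance input (toy batch)
ab_real = torch.rand(4, 2, 64, 64)  # ground-truth chroma

ab_fake = gen(l)
pred_fake = disc(torch.cat([l, ab_fake], dim=1))
# Generator objective: fool the discriminator while staying close to ground truth.
g_loss = bce(pred_fake, torch.ones_like(pred_fake)) + 100.0 * l1(ab_fake, ab_real)
print(float(g_loss))
```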

    Endogenous measures for contextualising large-scale social phenomena: a corpus-based method for mediated public discourse

    This work presents an interdisciplinary methodology for developing endogenous measures of group membership through analysis of pervasive linguistic patterns in public discourse. Focusing on political discourse, this work critiques the conventional approach to the study of political participation, which is premised on decontextualised, exogenous measures to characterise groups. Considering the theoretical and empirical weaknesses of decontextualised approaches to large-scale social phenomena, this work suggests that contextualisation using endogenous measures might provide a complementary perspective to mitigate such weaknesses. This work develops a sociomaterial perspective on political participation in mediated discourse as affiliatory action performed through language. While the affiliatory function of language is often performed consciously (such as statements of identity), this work is concerned with unconscious features (such as patterns in lexis and grammar). This work argues that pervasive patterns in such features that emerge through socialisation are resistant to change and manipulation, and thus might serve as endogenous measures of sociopolitical contexts, and thus of groups. In terms of method, the work takes a corpus-based approach to the analysis of data from the Twitter messaging service, whereby patterns in users' speech are examined statistically in order to trace potential community membership. The method is applied in the US state of Michigan during the second half of 2018, 6 November having been the date of midterm (i.e. non-Presidential) elections in the United States. The corpus is assembled from the original posts of 5,889 users, who are nominally geolocalised to 417 municipalities. These users are clustered according to pervasive language features. Comparing the linguistic clusters according to the municipalities they represent reveals regular sociodemographic differentials across clusters. This is understood as an indication of social structure, suggesting that endogenous measures derived from pervasive patterns in language may indeed offer a complementary, contextualised perspective on large-scale social phenomena.
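
    A minimal sketch of the clustering step, using toy posts and off-the-shelf TF-IDF features rather than the study's actual corpus and feature set:

```python
# Illustrative sketch: group users by lexical patterns in their aggregated
# posts (toy data and generic features, not the study's corpus or measures).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# One document per user: all of that user's posts concatenated.
user_docs = {
    "user_a": "gonna vote tuesday y'all bring your folks",
    "user_b": "the proposal's fiscal implications remain unexamined",
    "user_c": "y'all seen the lines gonna be there early",
    "user_d": "turnout projections suggest a notable shift in the electorate",
}

# Word and bigram frequencies as a stand-in for pervasive lexico-grammatical features.
vectorizer = TfidfVectorizer(analyzer="word", ngram_range=(1, 2))
features = vectorizer.fit_transform(user_docs.values())

# Cluster users by linguistic similarity, then compare clusters against
# the sociodemographics of the municipalities they represent.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
for user, label in zip(user_docs, labels):
    print(user, "-> cluster", label)
```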

    The determinants of value addition: a critical analysis of the global software engineering industry in Sri Lanka

    It was evident through the literature that the perceived value delivery of the global software engineering industry is low due to various factors. Therefore, this research concerns global software product companies in Sri Lanka to explore the software engineering methods and practices that increase value addition. The overall aim of the study is to identify the key determinants of value addition in the global software engineering industry and critically evaluate their impact on software product companies, to help maximise value addition and ultimately assure the sustainability of the industry. An exploratory research approach was used initially, since findings would emerge as the study unfolded. A mixed method was employed, as the literature alone was inadequate to investigate the problem effectively and formulate the research framework. Twenty-three face-to-face online interviews were conducted with subject matter experts covering all the disciplines of the targeted organisations; these were combined with the literature findings as well as the outcomes of market research conducted by both government and non-government institutes. Data from the interviews were analysed using NVivo 12. The findings of the existing literature were verified through the exploratory study, and the outcomes were used to formulate the questionnaire for the public survey. After cleansing the total responses received, 371 responses were considered for data analysis in SPSS 21 with an alpha level of 0.05. An internal consistency test was performed before the descriptive analysis. After assuring the reliability of the dataset, correlation, multiple regression and analysis of variance (ANOVA) tests were carried out to meet the research objectives. Five determinants of value addition were identified along with the key themes for each area: staffing, delivery process, use of tools, governance, and technology infrastructure. Cross-functional and self-organised teams built around the value streams, a properly interconnected software delivery process with the right governance in the delivery pipelines, careful selection of tools and provision of the right infrastructure increase value delivery. Moreover, the constraints on value addition are poor interconnection of the internal processes, rigid functional hierarchies, inaccurate selection and use of tools, inflexible team arrangements and inadequate focus on the technology infrastructure. The findings add to the existing body of knowledge on increasing value addition by employing effective processes, practices and tools, and on the impacts of inaccurate application of the same in the global software engineering industry.
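
    A hedged sketch of the quantitative pipeline (internal consistency, correlation, multiple regression, and ANOVA) on synthetic data; the column names and effect sizes below are illustrative, not the study's:

```python
# Illustrative analysis pipeline mirroring the described SPSS steps,
# run on synthetic data with made-up variable names.
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 371
df = pd.DataFrame({
    "staffing": rng.normal(3.5, 0.8, n),
    "delivery_process": rng.normal(3.6, 0.7, n),
    "tools": rng.normal(3.4, 0.9, n),
})
df["value_addition"] = (0.4 * df["staffing"] + 0.3 * df["delivery_process"]
                        + 0.2 * df["tools"] + rng.normal(0, 0.5, n))

# Internal consistency (Cronbach's alpha) over the item columns.
items = df[["staffing", "delivery_process", "tools"]]
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

# Correlation, multiple regression, and one-way ANOVA at alpha level 0.05.
corr = df.corr()["value_addition"]
ols = smf.ols("value_addition ~ staffing + delivery_process + tools", data=df).fit()
groups = pd.qcut(df["staffing"], 3, labels=False)
f_stat, p_value = stats.f_oneway(*[df.loc[groups == g, "value_addition"] for g in range(3)])

print(f"Cronbach's alpha: {alpha:.2f}, ANOVA F: {f_stat:.2f}, p: {p_value:.3f}")
print(ols.params.round(2))
```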

    DIN Spec 91345 RAMI 4.0 compliant data pipelining: An approach to support data understanding and data acquisition in smart manufacturing environments

    Today, data scientists in the manufacturing domain are confronted with a set of challenges associated with data acquisition as well as data processing, including the extraction of valuable information to support both the work of the manufacturing equipment and the manufacturing processes behind it. One essential aspect related to data acquisition is pipelining, which involves various communication standards, protocols and technologies to store and transfer heterogeneous data. These circumstances make it hard to understand, find, access and extract data from the sources, depending on the use cases and applications. In order to support this data pipelining process, this thesis proposes the use of a semantic model. The selected semantic model should be able to describe smart manufacturing assets themselves as well as access to their data along their life cycle. Many research contributions in smart manufacturing have already produced reference architectures or standards for semantic-based metadata description or asset classification. This research builds upon these outcomes and introduces a novel semantic model-based data pipelining approach using the Reference Architecture Model for Industry 4.0 (RAMI 4.0) as its basis, with data pipelining in the smart manufacturing domain as an exemplary use case to enable easy exploration, understanding, discovery, selection and extraction of data.
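
    As a purely illustrative sketch, the snippet below uses a made-up RDF vocabulary to describe a manufacturing asset along RAMI 4.0-style axes together with a pointer to its data endpoint, so that data can be discovered and accessed; it is not the thesis's actual model:

```python
# Illustrative only: a hypothetical vocabulary for describing a manufacturing
# asset along RAMI 4.0-style axes (hierarchy level, life-cycle phase) plus a
# pointer to its data endpoint, so a data scientist can find and access data.
from rdflib import Graph, Namespace, Literal, RDF

EX = Namespace("http://example.org/factory#")   # hypothetical namespace
g = Graph()
g.bind("ex", EX)

asset = EX["milling_machine_07"]
g.add((asset, RDF.type, EX.SmartManufacturingAsset))
g.add((asset, EX.hierarchyLevel, Literal("Station")))        # hierarchy axis
g.add((asset, EX.lifeCyclePhase, Literal("Maintenance")))    # life-cycle axis
g.add((asset, EX.dataEndpoint, Literal("opc.tcp://10.0.0.7:4840")))
g.add((asset, EX.communicationProtocol, Literal("OPC UA")))

# Query for assets by life-cycle phase and retrieve their data endpoints.
query = """
SELECT ?asset ?endpoint WHERE {
    ?asset ex:lifeCyclePhase "Maintenance" ;
           ex:dataEndpoint ?endpoint .
}
"""
for row in g.query(query, initNs={"ex": EX}):
    print(row.asset, row.endpoint)
```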

    Analyzing Usage Conflict Situations in Localized Spectrum Sharing Scenarios: An Agent-Based Modeling and Machine Learning Approach

    As spectrum sharing matures, different approaches have been proposed for a more efficient allocation, assignment, and usage of spectrum resources. These approaches include cognitive radios, multi-level user definitions, and radio environment maps, among others. However, spectrum usage conflicts (e.g., "harmful" interference) remain a common challenge in spectrum sharing schemes, particularly in conflict situations where it is necessary to take action to ensure the sound operation of sharing agreements. A typical example of a usage conflict is one where incumbents' tolerable levels of interference (i.e., interference thresholds) are surpassed. In this work, we present a new method to examine and study spectrum usage conflicts. A fundamental goal of this project is to capture local resource usage patterns in order to provide more realistic estimates of interference. For this purpose, we have defined two spectrum- and network-specific characteristics that directly impact the local interference assessment: the resource access strategy and the governance framework. Thus, we are able to test the viability of distributed or decentralized governance systems, including polycentric and self-governance, in spectrum sharing situations. In addition, we are able to design, model, and test a multi-tier spectrum sharing scheme that provides stakeholders with more flexible resource access opportunities. To perform this dynamic and localized study of spectrum usage and conflicts, we rely on Agent-Based Modeling (ABM) as our main analysis instrument. A crucial component for capturing local resource usage patterns is to provide agents with local information about their spectrum situation. Thus, the environment of the models presented in this dissertation is given by the radio environment map's Interference Cartography (IC) map. Additionally, the agents' definitions and actions result from the interaction of the technical aspects of resource access and management, stakeholder interactions, and the underlying usage patterns as defined in the Common Pool Resource (CPR) literature. Finally, to capture local resource usage patterns and, consequently, provide more realistic estimates of conflict situations, we enhance the classical rule-based ABM approach with Machine Learning (ML) techniques. Via ML algorithms, we refine the internal models of agents in an ABM, allowing the agents to choose more suitable responses to changes in the environment.
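
    A toy sketch of the core idea, under assumed names and values: an agent reads its local cell of an interference-cartography grid and uses an ML-refined internal model to decide whether transmitting would breach the incumbent's interference threshold:

```python
# Toy sketch: an agent consults a local interference-cartography grid and an
# ML-refined internal model before transmitting. All names and values are
# illustrative, not the dissertation's actual models or parameters.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
ic_map = rng.uniform(-110, -70, size=(20, 20))   # received interference (dBm) per cell
THRESHOLD_DBM = -85.0                            # incumbent's tolerable interference

class SecondaryUserAgent:
    def __init__(self, position, tx_gain_db=6.0):
        self.position = position
        self.tx_gain_db = tx_gain_db
        # Internal model: learns from past outcomes which local states led to conflict.
        self.model = DecisionTreeClassifier(max_depth=3)

    def local_state(self):
        r, c = self.position
        return np.array([[ic_map[r, c], self.tx_gain_db]])

    def train(self, past_states, past_conflicts):
        self.model.fit(past_states, past_conflicts)

    def decide(self):
        return "defer" if self.model.predict(self.local_state())[0] else "transmit"

# Synthetic past observations of (local interference level, gain) -> conflict flag.
states = np.column_stack([rng.uniform(-110, -70, 200), np.full(200, 6.0)])
conflicts = (states[:, 0] + states[:, 1] > THRESHOLD_DBM).astype(int)

agent = SecondaryUserAgent(position=(5, 5))
agent.train(states, conflicts)
print(agent.decide())
```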

    Recent trends in non-invasive neural recording based brain-to-brain synchrony analysis on multidisciplinary human interactions for understanding brain dynamics: a systematic review

    The study of brain-to-brain synchrony has burgeoning applications in brain-computer interface (BCI) research, offering valuable insights into the neural underpinnings of interacting human brains using numerous neural recording technologies. The area allows exploration of the commonality of brain dynamics by evaluating the neural synchronization among a group of people performing a specified task. The growing number of publications on brain-to-brain synchrony inspired the authors to conduct a systematic review using the PRISMA protocol so that future researchers can get a comprehensive understanding of the paradigms, methodologies, translational algorithms, and challenges in the area of brain-to-brain synchrony research. The review followed a systematic search with a specified search string and selected articles based on pre-specified eligibility criteria. The findings from the review revealed that most of the articles followed the social psychology paradigm, while 36% of the selected studies have an application in cognitive neuroscience. The most commonly applied approach to determining neural connectivity is a coherence measure utilizing the phase-locking value (PLV) in the EEG studies, followed by wavelet transform coherence (WTC) in all of the fNIRS studies. While most of the experiments included control experiments as part of their setup, a small number implemented algorithmic control, and only one study had an interventional or stimulus-induced control experiment to limit spurious synchronization. Hence, to the best of the authors' knowledge, this systematic review contributes to critically evaluating the scope and technological advances of brain-to-brain synchrony, allowing this discipline to produce more effective research outcomes in the future.
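
    For reference, a minimal sketch of a phase-locking value (PLV) computation between two simulated signals, the coherence measure most often reported in the reviewed EEG studies:

```python
# Minimal PLV sketch on simulated signals (toy data, not from any reviewed study).
import numpy as np
from scipy.signal import hilbert

fs, duration = 250, 10                      # sampling rate (Hz), length (s)
t = np.arange(0, duration, 1 / fs)

# Two 10 Hz signals with a stable phase offset plus noise.
sig_a = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
sig_b = np.sin(2 * np.pi * 10 * t + 0.8) + 0.5 * np.random.randn(t.size)

# Instantaneous phases from the analytic (Hilbert-transformed) signals.
phase_a = np.angle(hilbert(sig_a))
phase_b = np.angle(hilbert(sig_b))

# PLV: magnitude of the mean unit phasor of the phase difference (0 to 1).
plv = np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))
print(f"PLV: {plv:.2f}")
```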

    Public librarians' perspectives of digital library for rural areas of Capricorn District Municipality, Limpopo Province

    Even after two decades of a constitutional democracy prized as one of the most progressive achievements, public library and information services remain scarce in rural South Africa. This is despite the fact that Library and Information Services (LIS) are important for building and developing communities and fostering enlightenment among citizens, and it stands in direct contradiction to the rights to free education and to information enshrined in the Constitution. The establishment of new libraries in rural areas moves at an inexcusably slow pace, while the population and communities continue to grow rapidly, with a consequent demand for LIS. However, Information and Communication Technologies (ICTs) have positively impacted the library landscape and society at large. This transformation requires a shift from traditional ways of information provision to modern library services, namely digital libraries. This study set out to investigate public librarians' perspectives of a digital library for rural areas of Capricorn District Municipality (CDM), Limpopo Province, and to understand the nature of digital libraries and the requirements for access to and effective use of such libraries. A further objective was to ascertain whether digital libraries could be a solution to inaccessible LIS in rural areas. The study employed a qualitative research approach through an interpretive paradigm to investigate the perspectives of public librarians, and adopted a phenomenological research design. DeLone and McLean's Information System Success Model was adopted to frame the study. The population of the study comprised the twenty-three public librarians with various titles from the CDM employed by the Department of Sport, Arts and Culture and the local municipalities. A purposive sampling technique was employed, and the sample included five librarians from various public libraries. Data were collected through semi-structured interviews and analysed thematically. The findings revealed that digital libraries are not meant to replace physical libraries but to improve LIS. It was found that digital library users require ICT tools that some rural users may not be able to afford. Moreover, users have varying preferences regarding the format of information sources; some may need printers to convert digital information to print format, adding financial burdens for rural users. The study indicated that basic computer literacy skills are central to the access and use of digital library services and that no advanced training is necessary; self-training might be sufficient for the use of a digital library system, denoting the expectation of a usable system. The study revealed that the youth are expected to use digital library services more than other age groups, as youth are arguably conversant with internet technologies. The study also found that some librarians digitise some of their heavily used materials, such as curriculum books, to cater for the many users who flock to the library; however, copyright laws might be overlooked or not taken cognisance of. Based on the findings, it was recommended that current traditional libraries should operate as hybrid libraries to provide LIS to users without devices and with other access challenges. Digitisation equipment, reliable internet and well-trained personnel are seen as significant aspects of a digital library system. Moreover, it is encouraged that the digital library system be inclusive of people with disabilities and offer services beyond library services. It is further advised that the digital library system should provide those services of traditional libraries that are possible digitally and subscribe to online information resources. This study shall serve as a guideline for the implementation or establishment of digital libraries in a rural context. Other researchers can investigate the attitudes of digital library users and the likelihood of rural users accepting digital libraries.