5,852 research outputs found

    The SEEMP Approach to Semantic Interoperability for E-Employment

    SEEMP is a European project that promotes increased partnership between labour market actors and the development of closer relations between private and public employment services, making optimal use of each actor's specific characteristics and thus providing job-seekers and employers with better services. The need for flexible collaboration raises the issue of interoperability in both data exchange and the sharing of services. SEEMP proposes a solution that relies on the concepts of services and semantics to provide meaningful service-based communication among labour market actors while requiring only a minimal shared commitment.

    Journalistic Knowledge Platforms: from Idea to Realisation

    Journalistic Knowledge Platforms (JKPs) are a type of intelligent information system designed to augment news creation processes by combining big data, artificial intelligence (AI) and knowledge bases to support journalists. Despite their potential to revolutionise the field of journalism, the adoption of JKPs has been slow, even with scholars and large news outlets involved in their research and development. The slow adoption can be attributed to the technical complexity of JKPs, which has led news organisations to rely on multiple independent and task-specific production systems. This situation can increase the resource and coordination footprint and costs, while also posing the risk of losing control over data and facing vendor lock-in scenarios. The technical complexities remain a major obstacle, as there is no existing well-designed system architecture that would facilitate the realisation and integration of JKPs in a coherent manner over time. This PhD Thesis contributes to the theory and practice of knowledge-graph-based JKPs by studying and designing a software reference architecture to facilitate the instantiation of concrete solutions and the adoption of JKPs.
    The first contribution of this PhD Thesis provides a thorough and comprehensible analysis of the idea of JKPs, from their origins to their current state. This analysis provides the first-ever study of the factors that have contributed to the slow adoption, including the complexity of their social and technical aspects, and identifies the major challenges and future directions of JKPs. The second contribution presents the software reference architecture, which provides a generic blueprint for designing and developing concrete JKPs. The proposed reference architecture also defines two novel types of components intended to maintain and evolve AI models and knowledge representations. The third presents an instantiation example of the software reference architecture and details a process for improving the efficiency of information extraction pipelines. This framework facilitates a flexible, parallel and concurrent integration of natural language processing techniques and AI tools. Additionally, this Thesis discusses the implications of recent AI advances for JKPs and diverse ethical aspects of using JKPs. Overall, this PhD Thesis provides a comprehensive and in-depth analysis of JKPs, from the theory to the design of their technical aspects. This research aims to facilitate the adoption of JKPs and advance research in this field.
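    The flexible, parallel and concurrent integration of extraction components described above can be sketched in a few lines. This is a minimal illustration only: the extractor functions, their names and the merge strategy are invented here for the example and do not reflect the thesis's actual components.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical task-specific extractors; real JKP components would wrap
    # NLP models or AI tools behind a common callable interface.
    def extract_entities(text):
        return {"entities": [w for w in text.split() if w.istitle()]}

    def extract_keywords(text):
        return {"keywords": [w.lower() for w in text.split() if len(w) > 6]}

    def run_pipeline(text, extractors):
        """Run independent extractors concurrently and merge their outputs."""
        with ThreadPoolExecutor() as pool:
            futures = [pool.submit(fn, text) for fn in extractors]
            results = {}
            for f in futures:
                results.update(f.result())
        return results

    output = run_pipeline("Journalistic Knowledge Platforms support newsrooms",
                          [extract_entities, extract_keywords])
    ```

    Because each extractor is an independent callable, new techniques can be added or swapped without touching the rest of the pipeline, which is the kind of decoupling the abstract attributes to the reference architecture.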

    Standardizing Process-Data Exploitation by Means of a Process-Instance Metamodel

    The analysis of data produced by enterprises during process execution is key to knowing how these processes are working and how they can be improved. These data may be consumed for different types of analysis; for example, they can serve as input for process discovery, decision-making and even process querying tools. However, each type of analysis needs data in a different format, because each uses different techniques and tackles the problem from a different point of view. Fortunately, if we look at the data exploitation problem from a higher level of abstraction, we can see that all these points of view share a common ground: the business process model and its instantiation are at the kernel of all of them. In this paper, we propose the use of a Business Process Instance Metamodel, which serves as a common interface that decouples the applications producing business process data from those that consume and exploit it. A tool has been implemented as a proof of concept to facilitate the matching between data from different data sources and the metamodel.
    Funding: Ministerio de Ciencia y TecnologĂ­a TIN2015-63502-C3-2-R; Ministerio de Ciencia y TecnologĂ­a TIN2016-75394-
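    The idea of a shared instance model that decouples producers from consumers can be illustrated as follows. The class and field names here are invented for illustration; the paper's actual metamodel elements may differ.

    ```python
    from dataclasses import dataclass, field
    from typing import Any, Dict, List

    # Illustrative shared model: consumers (discovery, querying, decision
    # tools) read these classes instead of each source's native format.
    @dataclass
    class ActivityInstance:
        name: str
        start: str            # ISO-8601 timestamps kept as strings for brevity
        end: str
        attributes: Dict[str, Any] = field(default_factory=dict)

    @dataclass
    class ProcessInstance:
        case_id: str
        process_name: str
        activities: List[ActivityInstance] = field(default_factory=list)

    def from_csv_row(row: Dict[str, str]) -> ActivityInstance:
        """Adapter: maps one source-specific event record onto the shared model."""
        return ActivityInstance(name=row["activity"],
                                start=row["ts_start"], end=row["ts_end"])

    case = ProcessInstance("case-1", "order-to-cash")
    case.activities.append(from_csv_row({"activity": "Create Order",
                                         "ts_start": "2024-01-01T09:00",
                                         "ts_end": "2024-01-01T09:05"}))
    ```

    Each data source then only needs one adapter like `from_csv_row`, while every analysis tool programs against the single shared model.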

    Research Methods for Non-Representational Approaches of Organizational Complexity. The Dialogical and Mediated Inquiry

    This paper explores the methodological implications of non-representational approaches to organizational complexity. Representational theories focus on the syntactic complexity of systems, whereas organizing processes are predominantly characterized by semantic and pragmatic forms of complexity. After underlining the contribution of non-representational approaches to the study of organizations, the paper warns against the risk of confining the critique of representational frameworks to paradoxical dichotomies such as intuition versus reflexive thought or theorizing versus experimenting. To sort out this difficulty, it suggests using a triadic theory of interpretation, in particular the concepts of semiotic mediation, inquiry and dialogism. Semiotic mediation dynamically links situated experience and generic classes of meanings. Inquiry articulates logical thinking, narrative thinking and experimenting. Dialogism conceptualizes the production of meaning through the situated interactions of actors. A methodological approach based on these concepts, “the dialogical and mediated inquiry” (DMI), is proposed and tested in a case study about work safety in the construction industry. This interpretive view requires complicating the inquiring process rather than the mirroring models of reality. In DMI, the inquiring process is complicated by establishing pluralist communities of inquiry in which different perspectives challenge each other. Finally, the paper discusses the specific contribution of this approach compared with other qualitative methods, as well as its present limits.
    Keywords: Activity; Dialogism; Inquiry; Interpretation; Pragmatism; Research Methods; Semiotic Mediation; Work Safety

    A Survey on Linked Data and the Social Web as facilitators for TEL recommender systems

    Personalisation, adaptation and recommendation are central features of TEL environments. In this context, information retrieval techniques are applied as part of TEL recommender systems to filter and recommend learning resources or peer learners according to user preferences and requirements. However, the suitability and scope of possible recommendations depend fundamentally on the quality and quantity of available data, for instance, metadata about TEL resources as well as users. On the other hand, over the last years the Linked Data (LD) movement has succeeded in providing a vast body of well-interlinked and publicly accessible Web data, which in particular includes Linked Data of an explicitly or implicitly educational nature. This paper discusses the potential of LD to facilitate TEL recommender systems research and practice. In particular, it provides an overview of the most relevant LD sources and techniques, together with a discussion of their potential for the TEL domain in general and TEL recommender systems in particular. Results from highly related European projects are presented and discussed, together with an analysis of prevailing challenges and preliminary solutions.

    Facilitating Scientometrics in Learning Analytics and Educational Data Mining – the LAK Dataset

    The Learning Analytics and Knowledge (LAK) Dataset represents an unprecedented corpus that exposes a near-complete collection of bibliographic resources for a specific research discipline, namely the connected areas of Learning Analytics and Educational Data Mining. Covering over five years of scientific literature from the most relevant conferences and journals, the dataset provides Linked Data about bibliographic metadata as well as the full text of the paper bodies. The latter was enabled through special licensing agreements with ACM for publications not yet available through open access. The dataset has been designed following established Linked Data patterns, reusing established vocabularies and providing links to established schemas and entity coreferences in related datasets. Given the temporal and topical coverage of the dataset, being a near-complete corpus of research publications of a particular discipline, it facilitates scientometric investigations, for instance, about the evolution of a scientific field over time or correlations with other disciplines, as documented by its usage in a wide range of scientific studies and applications.
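    The kind of scientometric query such bibliographic Linked Data enables can be sketched with plain triple-pattern matching. The predicate style follows Dublin Core, but the specific resource identifiers and triples below are invented for illustration and are not drawn from the LAK Dataset itself.

    ```python
    # Minimal sketch of triple-pattern querying over bibliographic Linked Data.
    DCT = "http://purl.org/dc/terms/"

    triples = [
        ("ex:paper1", DCT + "title", "Scientometrics with Linked Data"),
        ("ex:paper1", DCT + "creator", "ex:author1"),
        ("ex:paper2", DCT + "creator", "ex:author1"),
    ]

    def match(triples, s=None, p=None, o=None):
        """Return triples matching a pattern; None acts as a wildcard."""
        return [(ts, tp, to) for ts, tp, to in triples
                if (s is None or ts == s)
                and (p is None or tp == p)
                and (o is None or to == o)]

    # All papers by ex:author1 -- authorship links like this are what make
    # scientometric analyses (productivity, co-authorship, field evolution)
    # straightforward over such a corpus.
    papers = [s for s, _, _ in match(triples, p=DCT + "creator", o="ex:author1")]
    ```

    In practice the same pattern would be expressed as a SPARQL query against the dataset's endpoint; the wildcard-matching logic above mirrors how a basic graph pattern resolves.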