
    Utilising ontology-based modelling for learning content management

    Learning content management needs to support a variety of open, multi-format Web-based software applications. We propose multidimensional, model-based semantic annotation as a way to support the management of access to and change of learning content. As the central contribution, we introduce an information architecture model that supports multi-layered learning content structures. We discuss interactive query access as well as change management for multi-layered learning content, and propose an ontology-enhanced traceability approach as the solution.
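
    A minimal sketch of the idea of annotating multi-layered learning content with ontology concepts and using those annotations for queries and change traceability. The layer names, the toy concept hierarchy, and the AnnotatedRepository class are illustrative assumptions, not the information architecture model from the paper.

# Toy ontology-based annotation for multi-layered learning content.
# Layers, concepts, and class names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ContentItem:
    """A learning content unit annotated with ontology concepts."""
    item_id: str
    layer: str                                  # e.g. "course", "module", "resource"
    concepts: set = field(default_factory=set)

class AnnotatedRepository:
    def __init__(self):
        self.items = {}                         # item_id -> ContentItem
        self.broader = {}                       # concept -> broader concept

    def add(self, item):
        self.items[item.item_id] = item

    def define_broader(self, concept, parent):
        self.broader[concept] = parent

    def _expand(self, concept):
        """The concept together with its directly narrower concepts."""
        narrower = {c for c, p in self.broader.items() if p == concept}
        return {concept} | narrower

    def query(self, concept, layer=None):
        """Interactive query access across layers via ontology annotations."""
        wanted = self._expand(concept)
        return [i for i in self.items.values()
                if i.concepts & wanted and (layer is None or i.layer == layer)]

    def impact_of_change(self, concept):
        """Traceability: items to review when a domain concept changes."""
        return [i.item_id for i in self.query(concept)]

repo = AnnotatedRepository()
repo.define_broader("UML class diagram", "Software modelling")
repo.add(ContentItem("lec-03", "module", {"Software modelling"}))
repo.add(ContentItem("ex-07", "resource", {"UML class diagram"}))
print(repo.impact_of_change("Software modelling"))  # ['lec-03', 'ex-07']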

    Why not empower knowledge workers and lifelong learners to develop their own environments?

    In industrial and educational practice, learning environments are designed and implemented by experts from many different fields, ranging from traditional software development and product management to pedagogy and didactics. Workplace and lifelong learning, however, imply that learners are more self-motivated, capable, and self-confident in achieving their goals; consequently, it is tempting to consider shifting certain development tasks to end-users in order to facilitate a more flexible, open, and responsive learning environment. Drawing on streams such as end-user development and opportunistic design, this paper elaborates a methodology for user-driven environment design for action-based activities. Based on an earlier research approach named 'Mash-Up Personal Learning Environments' (MUPPLE), we demonstrate how workplace and lifelong learners can be empowered to develop their own environment for collaborating in learner networks, and which prerequisites and support facilities this methodology requires.
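
    A minimal sketch, under loose assumptions, of what learner-driven environment composition in the mash-up spirit could look like: a learner bundles tools into action-based activities and shares them with peers in a learner network. The class names, tools, and sharing model are illustrative, not the paper's methodology or the MUPPLE platform API.

# Toy model of a learner assembling a personal learning environment.
# All names here are illustrative assumptions, not the MUPPLE API.
from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    url: str

@dataclass
class Activity:
    """An action-based learning activity bundling the tools it needs."""
    goal: str
    tools: list = field(default_factory=list)
    collaborators: set = field(default_factory=set)

@dataclass
class PersonalLearningEnvironment:
    owner: str
    activities: list = field(default_factory=list)

    def add_activity(self, goal):
        activity = Activity(goal)
        self.activities.append(activity)
        return activity

    def share(self, activity, peer):
        """Collaboration in learner networks: invite a peer into an activity."""
        activity.collaborators.add(peer)

# A workplace learner composes an environment without a developer in the loop.
ple = PersonalLearningEnvironment(owner="alice")
writing = ple.add_activity("Draft the project report")
writing.tools.append(Tool("Etherpad", "https://pad.example.org/report"))
ple.share(writing, "bob")
print([(a.goal, sorted(a.collaborators)) for a in ple.activities])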

    Separating Agent-Functioning and Inter-Agent Coordination by Activated Modules: The DECOMAS Architecture

    The embedding of self-organizing inter-agent processes in distributed software applications enables the decentralized coordination of system elements, based solely on concerted, localized interactions. The separation and encapsulation of the activities that are conceptually related to coordination is a crucial concern for systematic development practices, in order to prepare the reuse and systematic integration of coordination processes in software systems. Here, we discuss a programming model that is based on the externalization of process prescriptions and their embedding in Multi-Agent Systems (MAS). One fundamental design concern for a corresponding execution middleware is the minimally invasive augmentation of the activities that affect coordination. This design challenge is approached by the activation of agent modules: modules become software elements that reason about and modify their host agent. We discuss and formalize this extension within the context of a generic coordination architecture and exemplify the proposed programming model with the decentralized management of (web) service infrastructures.
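
    A minimal sketch of the separation described above: agent functioning (a service agent handling its own load) stays free of coordination code, while pluggable, activated modules reason about the host agent and coordinate it with its neighbours through purely local interactions. The class names and the load-balancing rule are illustrative assumptions, not the DECOMAS middleware interfaces.

# Agent functioning vs. externalized coordination via activated modules.
# Class names and the load-balancing rule are illustrative assumptions.
from abc import ABC, abstractmethod

class CoordinationModule(ABC):
    """Externalized coordination logic that reasons about and modifies its host."""
    def attach(self, host):
        self.host = host

    @abstractmethod
    def activate(self, neighbours):
        ...

class LoadBalancing(CoordinationModule):
    def activate(self, neighbours):
        # Localized interaction only: compare with neighbours, shed load if needed.
        if not neighbours:
            return
        least = min(neighbours, key=lambda a: a.load)
        if self.host.load - least.load > 1:
            self.host.load -= 1
            least.load += 1

class ServiceAgent:
    """Agent functioning (serving requests) stays free of coordination code."""
    def __init__(self, name, load):
        self.name, self.load = name, load
        self.modules = []

    def plug_in(self, module):
        module.attach(self)          # minimally invasive augmentation of the host
        self.modules.append(module)

    def step(self, neighbours):
        for module in self.modules:  # activated modules drive the coordination
            module.activate(neighbours)

agents = [ServiceAgent("a", 5), ServiceAgent("b", 1), ServiceAgent("c", 2)]
for agent in agents:
    agent.plug_in(LoadBalancing())
for _ in range(4):                   # a few decentralized coordination rounds
    for agent in agents:
        agent.step([other for other in agents if other is not agent])
print({agent.name: agent.load for agent in agents})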

    Assessment of Generic Skills through an Organizational Learning Process Model

    This contribution was presented at WEBIST 2018 (http://www.webist.org/?y=2018) and has been published by SCITEPRESS at http://www.scitepress.org/PublicationsDetail.aspx?ID=y9Yt0eHt02o=&. It appears in this repository with the permission of the publisher. Performance in generic skills is increasingly important for organizations to succeed in the current competitive environment. However, assessing the level of performance in generic skills of the members of an organization is a challenging task, subject to both subjectivity and scalability issues. Organizations usually base their organizational learning processes on a Knowledge Management System (KMS). This work presents a process model to support managers of KMSs in assessing the generic skills of their members. The process model was deployed through an extended version of a learning management system, connected with different information system tools specifically developed to enrich its features. A case study with Computer Science final-year students working on a software system was conducted following an authentic learning approach, showing promising results. Funding: Visaigle Project (grant TIN2017-85797-R).
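
    A minimal sketch of the general idea of deriving generic-skill assessments from activity recorded in a KMS or learning management system: events are mapped to the skills they evidence and peer ratings are aggregated per member. The event types, skill names, and scoring rule are illustrative assumptions, not the process model or tooling described in the paper.

# Toy aggregation of generic-skill evidence from KMS/LMS events.
# Event types, skills, and the scoring rule are illustrative assumptions.
from collections import defaultdict
from statistics import mean

# Hypothetical mapping from recorded event types to the skills they evidence.
EVENT_TO_SKILL = {
    "wiki_edit": "written communication",
    "code_review_comment": "critical thinking",
    "task_handover": "teamwork",
}

def assess(events):
    """Average peer-rated scores (1-5) per member and generic skill."""
    scores = defaultdict(lambda: defaultdict(list))
    for event in events:
        skill = EVENT_TO_SKILL.get(event["type"])
        if skill is not None:
            scores[event["member"]][skill].append(event["rating"])
    return {member: {skill: round(mean(values), 2)
                     for skill, values in per_skill.items()}
            for member, per_skill in scores.items()}

events = [
    {"member": "ana", "type": "wiki_edit", "rating": 4},
    {"member": "ana", "type": "code_review_comment", "rating": 5},
    {"member": "leo", "type": "task_handover", "rating": 3},
]
print(assess(events))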

    Position paper on realizing smart products: challenges for Semantic Web technologies

    In the rapidly developing space of novel technologies that combine sensing and semantic technologies, research on smart products has the potential to establish a research field in its own right. In this paper, we synthesize existing work in this area in order to define and characterize smart products. We then reflect on a set of challenges that semantic technologies are likely to face in this domain. Finally, in order to initiate discussion in the workshop, we sketch an initial comparison of smart products and semantic sensor networks from the perspective of knowledge technologies.

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided, covering inter alia rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple data stream environment is explored, and a range of techniques is reviewed, covering model-based approaches, 'programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the inability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and use this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other to increase detection rates and lower false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated.
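
    A minimal sketch of the hybrid idea summarised above: a rule-based correlator flags a known misuse pattern across temporally distributed events, while a simple learned profile of normal behaviour flags deviations that the rules do not cover. The event fields, the brute-force rule, the frequency profile, and the threshold are illustrative assumptions, not techniques taken from the surveyed systems.

# Hybrid misuse detection: rule-based correlation plus an anomaly check.
# Event fields, the rule, and the threshold are illustrative assumptions.
from collections import Counter

# Known-misuse rule: repeated failed logins followed by a success from one source.
def correlate_known_misuse(events, window=5):
    alerts = []
    for i, event in enumerate(events):
        if event["type"] == "login_ok":
            recent = events[max(0, i - window):i]
            fails = [r for r in recent
                     if r["type"] == "login_fail" and r["src"] == event["src"]]
            if len(fails) >= 3:
                alerts.append(f"brute force suspected from {event['src']}")
    return alerts

# Anomaly check: event-type frequencies far from a learned profile of normal behaviour.
def anomaly_score(events, normal_profile):
    counts = Counter(event["type"] for event in events)
    total = max(sum(counts.values()), 1)
    observed = {t: counts[t] / total for t in normal_profile}
    return sum(abs(observed.get(t, 0.0) - p) for t, p in normal_profile.items())

events = [{"type": "login_fail", "src": "10.0.0.7"}] * 4 + \
         [{"type": "login_ok", "src": "10.0.0.7"}]
print(correlate_known_misuse(events))                                   # known misuse
print(anomaly_score(events, {"login_ok": 0.9, "login_fail": 0.1}) > 0.5)  # unseen misuse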