151 research outputs found

    Personalized content provision for virtual learning environments via the semantic web

    In this paper we discuss how e-learning may be personalized along three distinct axes: teaching and learning pedagogical philosophies, the tailoring of educational processes to individual taste, and the coordination of these processes during execution. In doing so we are concerned with supporting users' choices of educational options in course delivery via Web services. We first assess the practical needs of learners and tutors, and then analyse the main research problems from a practical and pragmatic point of view. Following on from this, the design of an intelligent virtual learning environment (VLE) is described that maps a set of extensive didactic paradigms, represented by a system model and architecture. In this system, the semantic information of learning units and processes (e.g. the relationships among units) can be described and integrated in terms of the various requirements of our users. As a result, instructional materials with a wide variety of executional options and conditions can be built. Furthermore, by reassembling the semantics of learning content according to users' new demands, our target audience (both students and content deliverers) can change their particular educational experience dynamically. This VLE can provide high-powered pedagogy-layered personalization, thus enabling new managed e-learning Web services and applications.
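The idea of describing learning units semantically and reassembling them per learner can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual model: the `LearningUnit` class, the prerequisite links, and `assemble_course` are all invented names standing in for the semantic descriptions the VLE would hold.

```python
# Hypothetical sketch: learning units carry semantic relationships
# (prerequisites) and a presentation style; a course is reassembled to
# respect prerequisites while preferring the learner's style.

class LearningUnit:
    def __init__(self, uid, style, prerequisites=()):
        self.uid = uid                           # unit identifier
        self.style = style                       # e.g. "visual" or "textual"
        self.prerequisites = set(prerequisites)  # semantic "requires" links

def assemble_course(units, preferred_style):
    """Order units so prerequisites come first, preferring the learner's style."""
    ordered, placed = [], set()
    # Stable sort puts style-matching units ahead when several are ready.
    pending = sorted(units, key=lambda u: u.style != preferred_style)
    while pending:
        ready = [u for u in pending if u.prerequisites <= placed]
        unit = ready[0]
        ordered.append(unit.uid)
        placed.add(unit.uid)
        pending.remove(unit)
    return ordered

units = [
    LearningUnit("intro", "visual"),
    LearningUnit("advanced", "textual", prerequisites={"intro"}),
    LearningUnit("intro-alt", "textual"),
]
print(assemble_course(units, "visual"))  # prerequisite order is respected
```

Re-running `assemble_course` with a different `preferred_style` changes the ordering without touching the unit descriptions, which is the dynamic-reassembly idea the abstract describes.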

    A Knowledge Based Educational (KBEd) framework for enhancing practical skills in engineering distance learners through an augmented reality environment

    Technological advancement has changed distance-learning teaching and learning approaches; for example, virtual laboratories are increasingly used to deliver engineering courses. These advancements enhance distance learners' practical experience of engineering courses. While most of these efforts emphasise the importance of the technology, few have sought to understand the techniques for capturing, modelling and automating the knowledge of on-campus laboratory tutors. This lack of automation of tutors' knowledge has also affected the practical learning outcomes of engineering distance learners. Hence, there is a need to explore further how the tutor's knowledge, which is necessary for imparting and assessing practical skills, can be integrated through current technological advances in distance learning. One approach to address this concern is the use of Knowledge Based Engineering (KBE) principles, which provide standardised methods for capturing, modelling and embedding experts' knowledge into engineering design applications for the automation of product design. Utilising such principles could therefore facilitate automating engineering laboratory tutors' knowledge for teaching and assessing practical skills. However, there is limited research on the application of KBE principles in the educational domain. Therefore, this research explores the use of KBE principles to automate instructional design in engineering distance-learning technologies. The result is a Knowledge Based Educational (KBEd) framework that facilitates capturing, modelling and automating on-campus tutors' knowledge and introduces it to distance learning and teaching approaches. This study used a four-stage experimental approach, involving a rapid prototyping method, to develop the proposed KBEd framework into a functional prototype.
The developed prototype was further refined through internal and external expert groups using face-validity methods such as questionnaires, observation and discussion. The refined prototype was then evaluated through a welding task use case. The use cases were assessed by first-year engineering undergraduate students from Birmingham City University with no prior experience of welding. The participants were randomly separated into two groups (N = 46). One group learned and practised basic welding in the proposed KBEd system, while the other learned and practised in the conventional on-campus environment. A concurrent validity assessment was used to determine the usefulness of the proposed system for learning hands-on practical engineering skills. The results of the evaluation indicate that students who trained with the proposed KBEd system gained practical skills equivalent to those acquired in the real laboratory environment. The small performance variation between the two groups was rooted in the limitations of the system's hardware. The learning outcomes achieved also demonstrate the successful application of KBE principles in capturing, modelling and transferring knowledge from the real tutor to the AI tutor for automating the teaching and assessment of practical skills for distance learners. Further, the data analysis has shown the potential of KBEd to be extended to other taught distance-learning courses involving practical skills.
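One way to picture the capture of a tutor's assessment knowledge is as a set of declarative rules applied to measured attributes of a student's attempt. The rules, attribute names and thresholds below are invented for illustration; they are not taken from the KBEd system itself.

```python
# Illustrative sketch: a tutor's welding-assessment knowledge encoded as
# declarative rules, so an automated tutor can evaluate an attempt.

# Each rule: (description, predicate over a measured welding attempt).
ASSESSMENT_RULES = [
    ("travel speed within tolerance", lambda a: 2.0 <= a["speed_mm_s"] <= 4.0),
    ("torch angle near 15 degrees",   lambda a: abs(a["angle_deg"] - 15) <= 5),
    ("arc length kept short",         lambda a: a["arc_mm"] <= 3.0),
]

def assess(attempt):
    """Return per-rule pass/fail feedback, as an automated tutor might."""
    return {desc: rule(attempt) for desc, rule in ASSESSMENT_RULES}

feedback = assess({"speed_mm_s": 3.1, "angle_deg": 22, "arc_mm": 2.5})
# One rule fails here (angle too steep), so the AI tutor can give
# targeted feedback rather than a single overall grade.
```

Keeping the rules as data, separate from the evaluation loop, mirrors the KBE idea of capturing expert knowledge in a standardised, reusable form.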

    Database support for large-scale multimedia retrieval

    With the increasing proliferation of recording devices and the resulting abundance of multimedia data available nowadays, searching and managing these ever-growing collections becomes more and more difficult. In supporting retrieval tasks within large multimedia collections, not only the sheer size but also the complexity of the data and their associated metadata pose great challenges, particularly from a data management perspective. Conventional approaches to this task have been shown to have only limited success, largely due to the lack of support for the given data and the required query paradigms. In the area of multimedia research, the missing support for efficiently and effectively managing multimedia data and metadata has recently been recognised as a stumbling block that constrains further developments in the field. In this thesis, we bridge the gap between the database and multimedia retrieval research areas. We approach the problem of providing a data management system geared towards large collections of multimedia data and the corresponding query paradigms. To this end, we identify the necessary building blocks for a multimedia data management system that adopts the relational data model and the vector-space model. In essence, we make the following main contributions towards a holistic model of a database system for multimedia data: We introduce an architectural model describing a data management system for multimedia data from a system architecture perspective. We further present a data model which supports the storage of multimedia data and the corresponding metadata, and provides similarity-based search operations. This thesis describes an extensive query model for a very broad range of query paradigms, specifying both the logical and the executional aspects of a query.
Moreover, we consider the efficiency and scalability of the system in a distribution model and a storage model, and provide a large and diverse set of index structures for high-dimensional data coming from the vector-space model. The developed models crystallise into the scalable multimedia data management system ADAMpro, which has been implemented within the iMotion/vitrivr retrieval stack. We quantitatively evaluate our concepts on collections that exceed the current state of the art. The results underline the benefits of our approach and assist in understanding the role of the introduced concepts. Moreover, the findings provide important implications for future research in the field of multimedia data management.
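The core similarity-based search operation such a data model must support can be sketched as k-nearest-neighbour retrieval over feature vectors. The linear scan below is only illustrative; ADAMpro itself relies on dedicated index structures for high-dimensional data, and the collection and vectors here are made up.

```python
# Minimal sketch of similarity search in the vector-space model:
# rank objects by Euclidean distance of their feature vectors to a query.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn(collection, query, k):
    """Return the k object ids closest to the query feature vector."""
    scored = sorted(collection.items(), key=lambda kv: euclidean(kv[1], query))
    return [oid for oid, _ in scored[:k]]

features = {"img1": [0.1, 0.9], "img2": [0.8, 0.2], "img3": [0.15, 0.85]}
print(knn(features, [0.0, 1.0], k=2))  # → ['img1', 'img3']
```

A real system replaces the `sorted` scan with an index (e.g. space- or data-partitioning structures) so that distances are computed for only a fraction of the collection.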

    Workflow repository for providing configurable workflow in ERP

    Workflow in an ERP system that covers a large functional domain is prone to duplication. This research builds a workflow repository that stores the various workflows of ERP business processes and can be used to compose new workflows according to the needs of a new tenant. The proposed method consists of two stages: preprocessing and processing. The preprocessing stage aims to find the common workflow and sub-variants among the existing workflow variants. The workflow variants stored by users are Procure-to-Pay workflows. The variants are selected by a similarity-filtering method according to their resemblance and then merged to identify the common workflow and its sub-variants, which are stored as metadata mapped onto a relational database. Detection of common and sub-variant workflows achieved an accuracy of 92%. The common workflow comprises 3 common workflows derived from 8 workflow variants, and has 10% lower complexity than the previous model. The processing stage provides the configurable workflow. A user submits a query model to find the desired workflow; using similarity filtering, the possible common and/or sub-variant workflows are retrieved, and the user can recompose the common workflow through a workflow designer. Provision of configurable workflows by the ERP reached 100%: whatever the user needs can be provided by the ERP, either as a ready workflow or as a basis for forming another. Based on the experimental results, the workflow repository can be built with the proposed architecture and is able to store and provide workflows, to detect whether a workflow is common or a sub-variant, and to provide configurable workflows in which users exploit common and sub-variant workflows as the basis for composing the ones they need.
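The similarity-filtering and merging step can be pictured as comparing workflow variants by the activities they share and extracting the common core. This is a hedged sketch under the simplifying assumption that a workflow is just a set of activities (ignoring control flow); the activity names, the Jaccard measure and the 0.5 threshold are all illustrative, not the thesis's actual parameters.

```python
# Sketch: filter workflow variants by Jaccard similarity of their
# activity sets, then intersect the similar ones to get the common core.

def jaccard(a, b):
    """Similarity of two activity sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

def common_workflow(variants, threshold=0.5):
    """Keep variants similar enough to the first; return shared activities."""
    base = variants[0]
    similar = [v for v in variants if jaccard(base, v) >= threshold]
    return set.intersection(*similar)

variants = [
    {"create_pr", "approve_pr", "create_po", "receive_goods", "pay_invoice"},
    {"create_pr", "approve_pr", "create_po", "pay_invoice"},
    {"create_po", "receive_goods", "pay_invoice", "audit"},
]
print(common_workflow(variants))  # activities shared by all similar variants
```

In the repository, the common core and the per-variant remainders (the sub-variants) would then be stored as metadata rows in the relational database, ready to be recomposed in the workflow designer.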

    Landscape Analysis for the Specimen Data Refinery

    This report reviews the current state of the art in applied approaches to automated tools, services and workflows for extracting information from images of natural history specimens and their labels. We consider the potential for repurposing existing tools, including workflow management systems, and areas where more development is required. This paper was written as part of the SYNTHESYS+ project for software development and informatics teams working on new software-based approaches to improve mass digitisation of natural history specimens.

    MuCIGREF: multiple computer-interpretable guideline representation and execution framework for managing multimorbidity care

    Clinical Practice Guidelines (CPGs) supply evidence-based recommendations to healthcare professionals (HCPs) for the care of patients. Their use in clinical practice has many benefits for patients, HCPs and treating medical centres, such as enhancing the quality of care and reducing unwanted care variations. However, there are many challenges limiting their implementation. First, CPGs predominantly consider a specific disease; only a few refer to multimorbidity (i.e. the presence of two or more health conditions in an individual), and they are not able to adapt to dynamic changes in patient health conditions. The manual management of guideline recommendations is also challenging, since recommendations may adversely interact with each other due to their competing targets and/or be duplicated when several guidelines are applied concurrently to a multimorbid patient. This may result in undesired outcomes such as severe disability and increased hospitalisation costs, among others. Formalisation of CPGs into a Computer-Interpretable Guideline (CIG) format allows the guidelines to be interpreted and processed by computer applications, such as Clinical Decision Support Systems (CDSSs), enabling automated support to manage the limitations of guidelines. This thesis introduces a new approach to the problem of combining multiple concurrently implemented CIGs and their interrelations to manage multimorbidity care. MuCIGREF (Multiple Computer-Interpretable Guideline Representation and Execution Framework) is proposed, whose specific objectives are to present (1) a novel multiple-CIG representation language, MuCRL, in which a generic ontology is developed to represent the knowledge elements of CPGs and their interrelations, and to create the multimorbidity-related associations between them. A systematic literature review was conducted to discover CPG representation requirements and gaps in multimorbidity care management.
The ontology is built on a synthesis of well-known ontology-building lifecycle methodologies and is then transformed into a metamodel to support the CIG execution phase; and (2) a novel real-time multiple-CIG execution engine, MuCEE, in which CIG models are dynamically combined to generate consistent and personalised care plans for multimorbid patients. MuCEE comprises three modules: (i) the CIG acquisition module transfers CIGs to the personal care plan based on the patient's health conditions and supplies CIG version control; (ii) the parallel CIG execution module combines concurrently implemented CIGs by performing concurrency management, time-based synchronisation (e.g. multi-activity merging), modification, and time-based optimisation of clinical activities; and (iii) the CIG verification module checks for missing information and inconsistencies to support the CIG execution phases. Rule-based execution algorithms are presented for each module. A set of verification and validation analyses is then performed, involving real-world multimorbidity case studies and comparative analyses with existing work. The results show that the proposed framework can combine multiple CIGs and dynamically merge, optimise and modify their clinical activities using patient data. The framework can be used to support HCPs in a CDSS setting to generate unified and personalised care recommendations for multimorbid patients, merging multiple guideline actions and eliminating care duplications to maintain patient safety, while supplying optimised health-resource management that may improve operational and cost efficiency in real-world cases as well.
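The multi-activity merging idea (collapsing duplicated activities prescribed by several concurrent guidelines) can be illustrated with a toy example. This is not MuCEE's actual algorithm: the guideline names, activity names and `merge_care_plans` function are invented for illustration.

```python
# Illustrative sketch: merge activities from concurrently executed
# guidelines; an activity requested by several guidelines appears once
# in the unified plan, with its requesters tracked for provenance.

def merge_care_plans(plans):
    """plans: {guideline_name: [activity, ...]} -> {activity: [requesters]}."""
    merged = {}
    for guideline, activities in plans.items():
        for activity in activities:
            merged.setdefault(activity, []).append(guideline)
    return merged

plans = {
    "diabetes_cpg":     ["measure_hba1c", "check_renal_function"],
    "hypertension_cpg": ["measure_bp", "check_renal_function"],
}
merged = merge_care_plans(plans)
# 'check_renal_function' appears once, requested by both guidelines,
# so the duplicated test is eliminated from the patient's plan.
```

A real engine would additionally resolve timing constraints and adverse interactions between the merged activities, which the abstract attributes to the time-based synchronisation and verification modules.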

    Knowledge-base and techniques for effective service-oriented programming & management of hybrid processes

    Recent advances in Web 2.0, SOA, crowd-sourcing, social and collaboration technologies, as well as cloud computing, have truly transformed the Internet into a global development and deployment platform. As a result, developers have been presented with ubiquitous access to countless Web services, resources and tools. However, while enabling tremendous automation and reuse opportunities, new productivity challenges have also emerged: the exploitation of services and resources still requires skilled programmers and a development-centric approach, and is thus inevitably susceptible to the same repetitive, error-prone and time-consuming integration work each time a developer integrates a new API. Business Process Management (BPM), on the other hand, was proposed to support service-based integration. It provides the benefits of automation and modelling, which appeal to non-technical domain experts. The problem, however, is that it proves too rigid for unstructured processes. Without this level of support, building new applications either requires extensive manual programming or resorts to homebrew solutions. Alternatively, with the proliferation of SaaS, various such tools could be used for independent portions of the overall process, although this either presupposes conforming to the built-in process or results in "shadow processes" that exchange information and share decisions via e-mail or the like. There has therefore been a persistent gap in technological support between structured and unstructured processes. To address these challenges, this thesis deals with transitioning process support from structured to unstructured processes. We are motivated to harness the foundational capabilities of BPM and apply them to unstructured processes. We propose to achieve this by first addressing the productivity challenges of Web-service integration, simplifying that process while encouraging an incremental curation and collective reuse approach.
We then extend this to propose a Hybrid-Process Management Platform that holistically combines structured, semi-structured and unstructured activities, based on a unified task model that encapsulates a spectrum of process specificity, thereby bridging the current technology gap. The approach is exposed as service-based libraries and tools. We have devised several use-case scenarios and conducted user studies in order to evaluate the overall effectiveness of our proposed work.

    Expressive movement generation with machine learning

    Movement is an essential aspect of our lives. Not only do we move to interact with our physical environment, but we also express ourselves and communicate with others through our movements. In an increasingly computerized world where various technologies and devices surround us, our movements are essential parts of our interaction with and consumption of computational devices and artifacts. In this context, incorporating an understanding of our movements within the design of the technologies surrounding us can significantly improve our daily experiences. This need has given rise to the field of movement computing: developing computational models of movement that can perceive, manipulate, and generate movements. In this thesis, we contribute to the field of movement computing by building machine-learning-based solutions for automatic movement generation. In particular, we focus on using machine learning techniques and motion capture data to create controllable, generative movement models. We also contribute to the field through the datasets, tools, and libraries that we have developed during our research. We start by reviewing the work on building automatic movement generation systems using machine learning techniques and motion capture data. Our review covers background topics such as high-level movement characterization, training data, feature representation, machine learning models, and evaluation methods. Building on our literature review, we present WalkNet, an interactive agent walking movement controller based on neural networks. The expressivity of virtual, animated agents plays an essential role in their believability. Therefore, WalkNet integrates control over the expressive qualities of movement with the goal-oriented behaviour of an animated virtual agent. It allows us to control the generation in real time based on the valence and arousal levels of affect, the movement's walking direction, and the mover's movement signature.
Following WalkNet, we look at controlling movement generation using more complex stimuli, such as music represented by audio signals (i.e., non-symbolic music). Music-driven dance generation involves a highly non-linear mapping between temporally dense stimuli (i.e., the audio signal) and movements, which makes for a more challenging movement modelling problem. To this end, we present GrooveNet, a real-time machine learning model for music-driven dance generation.
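The control mechanism described for WalkNet (conditioning generation on valence, arousal and direction) commonly amounts to feeding a control vector alongside the pose into the model at every step. The sketch below illustrates only that conditioning pattern; the tiny random linear map is a stand-in for the trained network, and all dimensions and values are invented.

```python
# Sketch of conditioned generation: next pose is predicted from the
# current pose concatenated with a control vector, so changing the
# controls steers the generated movement in real time.
import random

random.seed(0)
POSE_DIM, CTRL_DIM = 4, 3
# Stand-in weights; a real model would be learned from motion capture.
W = [[random.uniform(-0.1, 0.1) for _ in range(POSE_DIM + CTRL_DIM)]
     for _ in range(POSE_DIM)]

def step(pose, control):
    """One generation step: next_pose = W @ [pose; control]."""
    x = pose + control  # concatenate pose and control inputs
    return [sum(w * v for w, v in zip(row, x)) for row in W]

pose = [0.0, 0.1, 0.0, -0.1]
control = [0.8, 0.3, 1.0]        # e.g. valence, arousal, walking direction
next_pose = step(pose, control)  # feeding this back in generates a sequence
```

Generating a whole animation is then a loop that feeds each predicted pose back in while the controls vary, which is what makes the model interactive.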

    An automated pipeline for constructing personalized virtual brains from multimodal neuroimaging data

    Large amounts of multimodal neuroimaging data are acquired every year worldwide. In order to extract high-dimensional information for computational neuroscience applications, standardized data fusion and efficient reduction into integrative data structures are required. Such self-consistent multimodal data sets can be used for computational brain modeling to constrain models with individual measurable features of the brain, as done with The Virtual Brain (TVB). TVB is a simulation platform that uses empirical structural and functional data to build full brain models of individual humans. For convenient model construction, we developed a processing pipeline for structural, functional and diffusion-weighted magnetic resonance imaging (MRI) and, optionally, electroencephalography (EEG) data. The pipeline combines several state-of-the-art neuroinformatics tools to generate subject-specific cortical and subcortical parcellations, surface tessellations, structural and functional connectomes, lead field matrices, electrical source activity estimates and region-wise aggregated blood oxygen level dependent (BOLD) functional MRI (fMRI) time series. The output files of the pipeline can be directly uploaded to TVB to create and simulate individualized large-scale network models that incorporate intra- and intercortical interaction on the basis of cortical surface triangulations and white matter tractography. We detail the pitfalls of the individual processing streams and discuss ways of validation. With the pipeline we also introduce novel ways of estimating the transmission strengths of fiber tracts in whole-brain structural connectivity (SC) networks and compare the outcomes of different tractography and parcellation approaches. We tested the functionality of the pipeline on 50 multimodal data sets.
In order to quantify the robustness of the connectome extraction part of the pipeline, we computed several metrics that quantify its rescan reliability and compared them to other tractography approaches. Together with the pipeline we present several principles to guide future efforts to standardize brain model construction. The code of the pipeline and the fully processed data sets are made available to the public via The Virtual Brain website (thevirtualbrain.org) and via GitHub (https://github.com/BrainModes/TVB-empirical-data-pipeline). Furthermore, the pipeline can be used directly with High Performance Computing (HPC) resources on the Neuroscience Gateway Portal (http://www.nsgportal.org) through a convenient web interface.
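One common way to quantify rescan reliability of connectome extraction is to correlate the connection weights obtained from repeated scans of the same subject. The sketch below shows that idea with a Pearson correlation over flattened connectivity values; the specific metric choice and the matrix entries are illustrative assumptions, not the paper's reported numbers.

```python
# Hedged sketch: rescan reliability as the Pearson correlation between
# the connection weights of two structural-connectivity extractions
# from repeated scans of the same subject (values here are made up).
import math

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Upper-triangle weights of two SC matrices (one weight per tract).
scan1 = [0.9, 0.1, 0.4]
scan2 = [0.85, 0.15, 0.38]
reliability = pearson(scan1, scan2)  # near 1.0 for a reliable pipeline
```

Comparing this value across tractography or parcellation choices gives a simple, scalar way to rank pipeline configurations by reproducibility.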

    Study about the relation between different design methodologies

    A study and comparison of several methodologies from different authors, with the aim of describing the most appropriate one depending on the type of design being developed. Throughout their degree, design students deal with many different design methodologies to apply to their projects. A great variety of methodologies exists, and many of them contradict one another, so it is left to the student's judgement to choose the one they consider most appropriate. Choosing the right methodology can condition the outcome of a design project. The aim of this final degree project is to study and analyse different methodologies, both those studied during the degree and those found in books that have been recommended to us. With this we plan to establish a classification according to different aspects, which will help us identify the different options and opportunities they offer. Different types of projects exist depending on the initial requirements: given a user, a material, a function, a specific company, etc. The final objective is to establish the most appropriate methodology for each type of project, accompanied throughout by product-design examples.