    Comparison of C4.5 Algorithm and Support Vector Machine in Predicting the Student Graduation Timeliness

    In higher education institutions, the graduation rate is one of many indicators used to assess the quality of the learning process. Al-Hidayah Islamic University in Bogor is an established private Islamic university that aims to produce skilled human resources with the moral values many companies now require. Having other institutions in Bogor as competitors with the same direction and objectives is a challenge for Al-Hidayah Islamic University, so a solution is required to face the competition. One solution is to predict student graduation timeliness using a data mining method with a classification function. The methodology applied is Knowledge Discovery in Databases (KDD), proceeding through selection, preprocessing, transformation, data mining, and evaluation/interpretation. Two algorithms were used in this paper: C4.5 and Support Vector Machine (SVM). The classification task consists of several predictor variables and one target variable; the predictor variables are gender, Grade Point Average, marital status, and job status. RapidMiner software was used to process the data. The final results show an 81% precision rate and an 80% accuracy level for the C4.5 algorithm, while SVM achieves an 88% precision rate and an 85% accuracy level.
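
    A minimal sketch of the comparison the abstract describes, assuming a tabular dataset with the four predictors named above. The CSV file and column names are hypothetical, and scikit-learn's entropy-criterion decision tree stands in for C4.5 (which scikit-learn does not implement directly), substituting for the paper's RapidMiner operators.

    ```python
    # Sketch: compare a C4.5-style decision tree and an SVM on graduation data.
    # The CSV file and column names are hypothetical placeholders.
    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC
    from sklearn.metrics import accuracy_score, precision_score

    df = pd.read_csv("students.csv")  # hypothetical dataset
    X = pd.get_dummies(df[["gender", "gpa", "marital_status", "job_status"]])
    y = df["on_time"]  # 1 = graduated on time, 0 = late

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42
    )

    for name, model in [
        ("C4.5-style tree", DecisionTreeClassifier(criterion="entropy")),
        ("SVM", SVC(kernel="rbf")),
    ]:
        model.fit(X_train, y_train)
        pred = model.predict(X_test)
        print(name,
              "precision:", precision_score(y_test, pred),
              "accuracy:", accuracy_score(y_test, pred))
    ```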

    e-LION: Data integration semantic model to enhance predictive analytics in e-Learning.

    The surge in online education has underlined the crucial role of Learning Management Systems (LMSs) in organizing learning resources and enabling teacher-learner communication. COVID-19 accelerated this trend, driving spikes in engagement and generating substantial learning data. Academic institutions now hold extensive data that can be analyzed comprehensively to inform educational planning. However, integrating these diverse, sizable datasets from heterogeneous sources with semantic inconsistencies is challenging, and standardized integration schemes are needed before the data can be used efficiently in machine learning models. Semantic web technologies offer a promising framework for the semantic integration of e-learning data, enabling systematic consolidation, linkage, and advanced querying. We propose the e-LION (e-Learning Integration ONtology) semantic model to consolidate diverse e-learning knowledge bases and enhance analytical capabilities. The model is populated with real-world data from various LMSs, focusing on Software Engineering courses from the University of Malaga (Spain) and the Open University Learning Analytics dataset, and validated through four in-depth case studies: advanced semantic queries feed predictive models, time-series forecasting of student interactions is performed based on final grades, and SWRL reasoning rules are developed for student behavior classification. The validation results are highly promising and suggest e-LION as an ontological mediator scheme for integrating future semantic models within the e-learning domain. This opens possibilities for leveraging e-LION to enhance educational planning, predictive modeling, and behavioral analysis, ultimately advancing e-learning through effective semantic integration and the use of diverse learning-related data.
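
    A minimal sketch of the kind of semantic querying the abstract describes: SPARQL results drawn from an RDF graph feed a simple predictive model. The graph file, namespace URI, and property names (hasInteractionCount, hasFinalGrade) are hypothetical stand-ins, not the published e-LION vocabulary.

    ```python
    # Sketch: query an e-learning RDF graph with SPARQL and feed the rows
    # into a predictive model. File name, namespace, and properties below
    # are invented stand-ins for the e-LION schema.
    from rdflib import Graph
    from sklearn.linear_model import LinearRegression

    g = Graph()
    g.parse("elion_data.ttl", format="turtle")  # hypothetical export

    query = """
    PREFIX el: <http://example.org/elion#>
    SELECT ?student ?clicks ?grade WHERE {
        ?student el:hasInteractionCount ?clicks ;
                 el:hasFinalGrade ?grade .
    }
    """

    rows = [(float(r.clicks), float(r.grade)) for r in g.query(query)]

    # Fit grade as a function of interaction count.
    X = [[clicks] for clicks, _ in rows]
    y = [grade for _, grade in rows]
    model = LinearRegression().fit(X, y)
    print("Predicted grade for 500 interactions:", model.predict([[500]])[0])
    ```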

    Methodologies for the Automatic Location of Academic and Educational Texts on the Internet

    Traditionally, online databases of web resources have been compiled by a human editor or through the submissions of authors or interested parties. Considerable resources are needed to maintain a constant level of input and relevance in the face of increasing material quantity and quality, and much of what is in databases is of an ephemeral nature. These pressures dictate that many databases stagnate after an initial period of enthusiastic data entry. The solution to this problem would seem to be the automatic harvesting of resources; however, this process necessitates the automatic classification of resources as ‘appropriate’ to a given database, a problem only solved by complex analysis of text content. This paper outlines the component methodologies necessary to construct such an automated harvesting system, including a number of novel approaches. In particular, it looks at the specific problems of automatically identifying academic research work and Higher Education pedagogic materials. Where appropriate, experimental data is presented from searches in the field of Geography as well as the Earth and Environmental Sciences. In addition, appropriate software is reviewed where it exists, and future directions are outlined.
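
    A minimal sketch of the harvest-and-filter step such a system needs, with a naive keyword heuristic standing in for the complex text-content analysis the paper describes; the URL and cue-word list are purely illustrative assumptions.

    ```python
    # Sketch: fetch a candidate page and apply a crude relevance filter,
    # a stand-in for a real classifier trained on labelled pages.
    import urllib.request

    ACADEMIC_CUES = ["abstract", "references", "methodology", "doi", "et al"]

    def looks_academic(url: str, threshold: int = 2) -> bool:
        """Count academic cue words in the page text; a crude proxy for
        full text-content analysis."""
        with urllib.request.urlopen(url) as resp:
            text = resp.read().decode("utf-8", errors="ignore").lower()
        hits = sum(cue in text for cue in ACADEMIC_CUES)
        return hits >= threshold

    if __name__ == "__main__":
        print(looks_academic("https://example.org/some-paper.html"))  # hypothetical URL
    ```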

    LIPID MAPS: update to databases and tools for the lipidomics community

    LIPID MAPS (LIPID Metabolites and Pathways Strategy), www.lipidmaps.org, provides a systematic and standardized approach to organizing lipid structural and biochemical data. Founded 20 years ago, the LIPID MAPS nomenclature and classification has become the accepted community standard. LIPID MAPS provides databases for cataloging and identifying lipids at varying levels of characterization, in addition to numerous software tools and educational resources, and became an ELIXIR-UK data resource in 2020. This paper describes the expansion of existing databases in LIPID MAPS, including richer metadata with literature provenance, taxonomic data, and improved interoperability to facilitate FAIR compliance. A joint project funded by ELIXIR-UK, in collaboration with WikiPathways, curates and hosts pathway data and annotates lipids in the context of their biochemical pathways. Updated features of the search infrastructure are described, along with the implementation of programmatic access via API and SPARQL. New lipid-specific databases have been developed, and the provision of lipidomics tools to the community has been updated. Training and engagement have been expanded with webinars, podcasts, and an online training school.
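
    A minimal sketch of the programmatic access mentioned above, following the LIPID MAPS REST convention of /rest/{context}/{input item}/{value}/{output}; the exact endpoint path, example identifier, and JSON field names are recollections that should be verified against the current LIPID MAPS API documentation.

    ```python
    # Sketch: look up a lipid record through the LIPID MAPS REST interface.
    # Endpoint pattern and response fields should be checked against the
    # current documentation before relying on them.
    import json
    import urllib.request

    LM_ID = "LMFA01010001"  # palmitic acid; example identifier
    url = f"https://www.lipidmaps.org/rest/compound/lm_id/{LM_ID}/all/json"

    with urllib.request.urlopen(url) as resp:
        record = json.loads(resp.read().decode("utf-8"))

    print(record.get("name"), "-", record.get("formula"))
    ```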

    Systematic development of courseware systems

    Various difficulties have been reported in relation to the development of courseware systems. A central problem is to address the needs not only of the learner, but also of the instructor, developer, and other stakeholders, and to integrate these different needs. Another problem area is courseware architectures, to which much work has been dedicated recently. We present a systematic approach to courseware development – a methodology for courseware engineering – that addresses these problems. This methodology is rooted in the educational domain and is based on methods for software development in this context. We illustrate how this methodology can improve the quality of courseware systems and the development process.

    A hybrid method for the analysis of learner behaviour in active learning environments

    Software-mediated learning requires adjustments in the teaching and learning process. In particular, active learning facilitated through interactive learning software differs from traditional instructor-oriented, classroom-based teaching. We present behaviour analysis techniques for Web-mediated learning. Motivation, acceptance of the learning approach and technology, learning organisation, and actual tool usage are aspects of behaviour that require different analysis techniques. A hybrid method based on a combination of survey methods and Web usage mining techniques can provide accurate and comprehensive analysis results. These techniques allow us to evaluate active learning approaches implemented in the form of Web tutorials.
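
    A minimal sketch of the Web usage mining half of such a hybrid method, assuming an access log already parsed into CSV; the column names and the 30-minute inactivity rule for splitting sessions (a common convention) are assumptions, not the paper's actual pipeline.

    ```python
    # Sketch: derive simple tool-usage measures from a Web access log.
    # Column names (user, timestamp, page) and the 30-minute
    # sessionisation rule are illustrative assumptions.
    import pandas as pd

    log = pd.read_csv("access_log.csv", parse_dates=["timestamp"])  # hypothetical log

    log = log.sort_values(["user", "timestamp"])
    gap = log.groupby("user")["timestamp"].diff() > pd.Timedelta(minutes=30)
    log["session"] = gap.groupby(log["user"]).cumsum()

    sessions = log.groupby(["user", "session"]).agg(
        pages=("page", "count"),
        start=("timestamp", "min"),
        end=("timestamp", "max"),
    )
    sessions["duration_min"] = (
        (sessions["end"] - sessions["start"]).dt.total_seconds() / 60
    )
    # Average session length and page count per learner.
    print(sessions[["pages", "duration_min"]].groupby("user").mean())
    ```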

    Quality assurance for digital learning object repositories: issues for the metadata creation process

    Metadata enables users to find the resources they require; it is therefore an important component of any digital learning object repository. Much work has already been done within the learning technology community to assure metadata quality, focused on the development of metadata standards, specifications, and vocabularies and their implementation within repositories. The metadata creation process, however, has thus far been largely overlooked. There has been an assumption that metadata creation will be straightforward and that, where machines cannot generate metadata effectively, authors of learning materials will be the most appropriate metadata creators. However, repositories are reporting difficulties in obtaining good-quality metadata from their contributors, and it is becoming apparent that the issue of metadata creation warrants attention. This paper surveys the growing body of evidence, including three UK-based case studies, scopes the issues surrounding human-generated metadata creation, and identifies questions for further investigation. Collaborative creation of metadata by resource authors and metadata specialists, and the design of tools and processes, are emerging as key areas for deeper research. Research is also needed into how end users will search learning object repositories.
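
    A minimal sketch of the kind of record a metadata creation tool hands to a repository, using simple Dublin Core elements rather than a full IEEE LOM instance; the field values are invented, and a real repository would dictate its own application profile.

    ```python
    # Sketch: generate a minimal Dublin Core metadata record for a
    # learning object. Field values are invented for illustration.
    import xml.etree.ElementTree as ET

    DC = "http://purl.org/dc/elements/1.1/"
    ET.register_namespace("dc", DC)

    record = ET.Element("record")
    for element, value in [
        ("title", "Introduction to Plate Tectonics"),
        ("creator", "A. N. Author"),
        ("subject", "Earth Sciences"),
        ("description", "Self-study tutorial for first-year undergraduates."),
    ]:
        child = ET.SubElement(record, f"{{{DC}}}{element}")
        child.text = value

    print(ET.tostring(record, encoding="unicode"))
    ```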

    The RCSB Protein Data Bank: views of structural biology for basic and applied research and education.

    The RCSB Protein Data Bank (RCSB PDB, http://www.rcsb.org) provides access to 3D structures of biological macromolecules and is one of the leading resources in biology and biomedicine worldwide. Our efforts over the past two years have focused on enabling a deeper understanding of structural biology and providing new structural views of biology that support both basic and applied research and education. Herein, we describe recently introduced data annotations, including integration with external biological resources such as gene and drug databases, new visualization tools, and improved support for the mobile web. We also describe access to data files, web services, and open-access software components that enable software developers to mine the PDB archive and related annotations more effectively. Our efforts are aimed at expanding the role of 3D structure in understanding biology and medicine.
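
    A minimal sketch of the web-service access described above, using the current RCSB Data API pattern (data.rcsb.org/rest/v1/core/entry/{id}); that service postdates this abstract, so treat the exact path and response fields as assumptions to verify against the RCSB documentation.

    ```python
    # Sketch: retrieve entry-level annotations for a PDB structure via the
    # RCSB Data API. Endpoint and JSON fields reflect the current service
    # and may differ from the interfaces the paper describes.
    import json
    import urllib.request

    pdb_id = "4HHB"  # human deoxyhaemoglobin, a classic example entry
    url = f"https://data.rcsb.org/rest/v1/core/entry/{pdb_id}"

    with urllib.request.urlopen(url) as resp:
        entry = json.loads(resp.read().decode("utf-8"))

    print(entry.get("struct", {}).get("title"))
    print("Experimental method(s):",
          [m.get("method") for m in entry.get("exptl", [])])
    ```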