23 research outputs found

    Enhanced bibliographic data retrieval and visualization using query optimization and spectral centrality measure

    As the amount of data generated grows exponentially, harnessing such voluminous data, especially bibliographic data, has become a major challenge in recent years. This study proposes an enhanced bibliographic data retrieval and visualization approach using a hybrid clustering method, consisting of the K-harmonic means (KHM) and spectral algorithms, together with the eigenvector centrality measure. A steady increase in publications recorded in the Digital Bibliography and Library Project (DBLP) can be identified from 1936 until 2018, reaching 4,327,507 publications. This study focuses on the visualization of bibliographic data by retrieving the most influential papers using the hybrid clustering technique and visualizing them in an understandable network diagram using weighted nodes. This web-based approach uses the Java programming language and MongoDB (a NoSQL database) to improve retrieval performance by 80%, improve the precision of bibliographic search results by omitting non-significant papers, and visualize a clearer network diagram using the centrality measure for better decision making. This method will make it easier for young researchers, educators and students to dive into enormous real-world social and biological networks
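    A minimal sketch of the eigenvector centrality step, computed by power iteration on a tiny, made-up citation graph (the adjacency matrix and four-paper example are hypothetical, not the DBLP data):

```python
# Eigenvector centrality by power iteration: a node is influential if it is
# linked to by other influential nodes.
def eigenvector_centrality(adj, iterations=100, tol=1e-9):
    n = len(adj)
    scores = [1.0 / n] * n
    for _ in range(iterations):
        # Multiply the score vector by the adjacency matrix.
        new = [sum(adj[j][i] * scores[j] for j in range(n)) for i in range(n)]
        # Normalize to unit length so the scores do not blow up.
        norm = sum(v * v for v in new) ** 0.5 or 1.0
        new = [v / norm for v in new]
        if max(abs(a - b) for a, b in zip(new, scores)) < tol:
            scores = new
            break
        scores = new
    return scores

# Undirected co-citation links between four papers (symmetric matrix).
adj = [
    [0, 1, 1, 1],
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 0, 0],
]
scores = eigenvector_centrality(adj)
most_influential = max(range(len(scores)), key=scores.__getitem__)
print(most_influential)  # paper 0, which links to all the others
```

    In a weighted network diagram, these scores would drive the node sizes, so the most influential papers stand out visually.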

    Bibliographic data retrieval using query optimization techniques in Mongodb

    The rise of unstructured, semi-structured and structured data is making the data exploration task more and more challenging. Technologies, especially databases, are evolving to tackle the rapidly growing data and extract meaningful insight. NoSQL (Not Only SQL) databases come into the picture to manage the distinguishing characteristics of big data. The publications recorded in the Digital Bibliography and Library Project (DBLP) have also been increasing steadily from 1936 until 2019, reaching 4,886,660 publications. Proper storage and retrieval techniques are needed to make these data available with a faster response time for any young researcher entering the computer science field. This paper explores query optimization techniques in MongoDB using bibliographic data
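    A minimal sketch of the principle behind one such optimization, indexing: a full collection scan touches every document, while an index reaches matching records directly. The dict below stands in for what an index such as `db.papers.createIndex({year: 1})` builds in MongoDB; the collection and field names are hypothetical:

```python
# Synthetic bibliographic collection: 10,000 papers spread over 1936-2019.
papers = [
    {"_id": i, "title": f"Paper {i}", "year": 1936 + (i % 84)}
    for i in range(10_000)
]

def collection_scan(docs, year):
    # Without an index, every document must be examined: O(n) per query.
    return [d for d in docs if d["year"] == year]

# Build a hash index on "year" (MongoDB uses a B-tree, but the effect on
# equality queries is the same: skip straight to the matching bucket).
year_index = {}
for doc in papers:
    year_index.setdefault(doc["year"], []).append(doc)

def indexed_lookup(index, year):
    # With the index, reaching the matching records is a single lookup.
    return index.get(year, [])

# Both strategies return the same documents; only the work done differs.
assert collection_scan(papers, 2018) == indexed_lookup(year_index, 2018)
```

    In MongoDB itself, `explain()` on a query reports whether it ran as a collection scan (`COLLSCAN`) or used an index (`IXSCAN`).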

    Metacognitive strategies in teaching and learning computer programming

    It has been noted that teaching and learning programming is challenging in computer science education and that this is a universal problem. Understanding and writing programs are perceived as very challenging, owing to the demand for practical ability rather than theory alone. Studies have revealed that students with metacognitive management skills perform well in programming compared to lower-performing students. The more difficult the programming activity, the greater the need for the programmer to possess metacognitive control skills. The cognitive processes in learning computer programming require a novice programmer to develop metacognitive skills. The main objective of this research work is to identify the metacognitive strategies in teaching and learning programming. An exploratory study was set up to identify the level of metacognitive awareness of novice programmers using the MAI instrument. Interview sessions with expert lecturers were also conducted to identify the metacognitive approaches and the pedagogical methods applied in teaching and learning activities. The learning behaviours of novices were also identified through the interview sessions. It can be concluded that there is a correlation between an individual's metacognitive awareness level and their academic achievement

    Hybrid machine learning model based on feature decomposition and entropy optimization for higher accuracy flood forecasting

    The advancement of machine learning models has been widely adopted to provide flood forecasts. However, the model must deal with the challenge of determining the most important features to use in flood forecasting with high-dimensional non-linear time series when involving data from various stations. Decomposition of time-series data, such as empirical mode decomposition, ensemble empirical mode decomposition and discrete wavelet transform, is widely used for input optimization; however, these techniques have been applied to single-dimension time-series data and are unable to determine relationships between data in a high-dimensional time series. In this study, hybrid machine learning models are developed based on feature decomposition to forecast the monthly water level using monthly rainfall data. Rainfall data from eight stations in the Kelantan River Basin are used in the hybrid model. To effectively select the rainfall data from the multiple stations that provide higher accuracy, these rainfall data are analyzed with an entropy measure called Mutual Information, which quantifies the uncertainty shared between random variables from the various stations. Mutual Information acts as an optimization method that helps the researcher select the appropriate features to achieve higher model accuracy. The experimental evaluations proved that the hybrid machine learning model based on feature decomposition and ranked by Mutual Information can increase the accuracy of water level forecasting. This outcome will help the authorities manage flood risk and help people in the evacuation process, as an early warning can be issued and disseminated to citizens
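    A minimal sketch of the Mutual Information ranking step on synthetic, binned data (the station names and values are hypothetical, not the Kelantan River Basin records):

```python
# Rank rainfall stations by mutual information (MI) with the water level,
# computed on discretized (binned) monthly series.
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """MI in bits between two discrete series of equal length."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(
        (c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
        for (x, y), c in pxy.items()
    )

water_level = [0, 0, 1, 1, 0, 1, 0, 1]      # binned monthly water level
stations = {
    "station_A": [0, 0, 1, 1, 0, 1, 0, 1],  # tracks the water level exactly
    "station_B": [0, 1, 0, 1, 0, 1, 0, 1],  # partly informative
    "station_C": [0, 0, 0, 0, 1, 1, 1, 1],  # statistically unrelated
}

# Stations with higher MI share more information with the target series.
ranked = sorted(
    stations,
    key=lambda s: mutual_information(stations[s], water_level),
    reverse=True,
)
print(ranked[0])  # station_A
```

    The top-ranked stations would then feed the decomposition-based hybrid model, while uninformative ones are dropped.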

    Rasch model assessment of algebraic word problem among year 8 Malaysian students

    Word problems continue to be a challenge for students today. All students must meet the prerequisites for problem-solving and reasoning skills, which are important components of the critical thinking element of 21st century skills. This study assesses students' strategies for solving word problems involving numbers, consecutive integers, and ages. The Rasch model is used to analyze the item difficulty level of the word problems and students' strategies for solving ten word problems at various difficulty levels within a similar trait. Then, Pearson correlation analysis is used to investigate the item difficulty level in relation to the linguistic, algebraic, and arithmetic factors of the word problems before evaluating students' performance in solving them using various strategies. The Rasch model found these algebraic word-problem questions to be slightly harder for year 8 Malaysian students relative to an international standard. Meanwhile, the item difficulty of the word problems is driven by linguistic and algebraic factors: students can score accurately when the word problems contain explicit information. However, students encountered difficulties and lost their solution strategy when the questions contained implicit data that demanded critical thinking ability
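    The dichotomous Rasch model underlying this item-difficulty analysis can be sketched as follows (the ability and difficulty values are hypothetical, not the study's estimates):

```python
# Dichotomous Rasch model: the probability of a correct response depends
# only on the gap between student ability (theta) and item difficulty (b),
# both measured in logits.
from math import exp

def rasch_probability(theta, b):
    """P(correct) = exp(theta - b) / (1 + exp(theta - b))."""
    return exp(theta - b) / (1 + exp(theta - b))

# A student at the reference ability (theta = 0) facing a slightly harder
# item (b = 0.5 logits) answers correctly less than half the time.
p = rasch_probability(0.0, 0.5)
print(round(p, 3))  # 0.378
```

    When ability equals difficulty the probability is exactly 0.5, which is why harder-than-standard items show up as below-chance success rates for the average student.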

    A comparative analysis study on information security threat models: a propose for threat factor profiling

    This study describes a comparative analysis conducted on existing approaches, frameworks and relevant references used in the field of information security. The purpose of this study is to identify suitable components for developing a threat factor profile. By having a threat factor profile, organizations will have a clear understanding of the threats they face, enabling them to implement a proactive incident management program that focuses on the threat components. This study also discusses the proposed threat factor profiling

    Fuzzy logic model for flood warning expert system integrating multi-agent and ontology

    An expert system is vital in flood warning for determining the decision-making output, allowing proper control of the inputs consisting of river level and rainfall. The river level and rainfall inputs are naturally stochastic, uncertain and unpredictable. Therefore, a fuzzy logic model of the flood warning expert system has been designed for users to obtain information about floods. This fuzzy model integrates multi-agent and ontology approaches and is expected to handle both uncertainty and accuracy issues, whereas previous research focused on handling either uncertainty or accuracy only. A simulation of the outcome based on river level and rainfall is presented in this study
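    A minimal sketch of such a fuzzy rule base, with triangular membership functions and the min operator for AND (the thresholds and the single rule are hypothetical, not the paper's actual model):

```python
# Fuzzy flood warning: fuzzify the crisp river level and rainfall inputs,
# evaluate a rule, and map the firing strength to a warning category.
def triangular(x, a, b, c):
    """Triangular membership function peaking at b over the support [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def flood_warning(river_level_m, rainfall_mm):
    river_high = triangular(river_level_m, 3.0, 5.0, 7.0)
    rain_heavy = triangular(rainfall_mm, 40.0, 80.0, 120.0)
    # Rule: IF river is high AND rain is heavy THEN raise the alert
    # (min implements the fuzzy AND).
    danger = min(river_high, rain_heavy)
    if danger > 0.7:
        return "danger"
    if danger > 0.3:
        return "warning"
    return "normal"

print(flood_warning(5.0, 80.0))  # danger
print(flood_warning(4.0, 60.0))  # warning
print(flood_warning(2.0, 10.0))  # normal
```

    The smooth membership grades are what let the system absorb the stochastic, uncertain nature of the two inputs instead of flipping on hard thresholds.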

    Fuzzy AHP and TOPSIS in cross domain collaboration recommendation with fuzzy visualization representation

    A cross-domain collaboration recommendation method is proposed by combining the fuzzy Analytic Hierarchy Process (AHP), the fuzzy Technique for Order Preference by Similarity to Ideal Solution (TOPSIS), and a fuzzy network graph as an interactive visualization method. Existing cross-domain recommendation tackles the sparsity, scalability, cold-start and serendipity issues found in single-domain recommendation; therefore, the combination of fuzzy AHP and TOPSIS with a visualization method may give decision makers a quick start in initiating cross-domain collaborations. The proposed method is applied to the DBLP bibliographic citation dataset, which consists of 10 domains in the field of computer science. Results show that the combination of fuzzy AHP and TOPSIS enables decision makers to find several authors across domains comprising 2.2 million publications in less than 3 minutes. The combined method is represented with a fuzzy visualization technique for fuzzy data. The establishment of this cross-domain recommendation will set the stage for efficient preparation for researchers who are interested in venturing into other domains to increase their research competency
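    A minimal sketch of the TOPSIS ranking step, in its crisp form for brevity (the paper uses fuzzy AHP weights and fuzzy TOPSIS; the candidate authors, criteria and weights below are hypothetical):

```python
# TOPSIS: rank alternatives by closeness to the ideal solution and
# distance from the anti-ideal solution.
def topsis(matrix, weights):
    n_criteria = len(weights)
    # Vector-normalize each criterion column, then apply the weights
    # (which fuzzy AHP would supply in the full method).
    norms = [sum(row[j] ** 2 for row in matrix) ** 0.5 for j in range(n_criteria)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_criteria)]
         for row in matrix]
    # Ideal and anti-ideal points (all criteria treated as benefits here).
    best = [max(col) for col in zip(*v)]
    worst = [min(col) for col in zip(*v)]

    def dist(row, ref):
        return sum((a - b) ** 2 for a, b in zip(row, ref)) ** 0.5

    # Closeness coefficient in [0, 1]: higher means nearer the ideal.
    return [dist(r, worst) / (dist(r, worst) + dist(r, best)) for r in v]

# Criteria per candidate author: publications, citations, domain overlap.
candidates = [[120, 900, 0.4], [60, 1500, 0.7], [200, 400, 0.2]]
weights = [0.3, 0.5, 0.2]
scores = topsis(candidates, weights)
ranked = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
print(ranked[0])  # candidate 1: most cited, strongest cross-domain overlap
```

    The fuzzy variant replaces the crisp criterion values and weights with triangular fuzzy numbers, but the normalize-weight-distance-rank pipeline is the same.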