59 research outputs found

    STSIM: A Stakeholder-Teacher-Student Interactive Model in the College of Computer and Information Sciences at King Saud University

    The stakeholder, teacher, and student relationship in the College of Computer and Information Sciences is explained. A working model aimed at enhancing and monitoring output quality and meeting the demands of stakeholders in the Kingdom of Saudi Arabia is introduced. Inspired by the university's mission and its social responsibility, the college mission is implemented as the key engine driving the objectives of each program and course. A strategic plan for the college probes and controls the college output using real-time monitoring and rapid-response (SWAT) team engagement. Each program has a five-year action plan that guides the development of the college output and measures key performance indicators for the education, research, and community service processes. The college programs are engaged in demanding national and international academic accreditation processes, which impose specific requirements and track the development of several performance indicators for academic as well as community service activities. Each course folder contains information on the objectives of the course, its impact on the overall mission of the college, and the skills targeted by key stakeholders. Extensive exposure of the education and training system to the integral culture afforded by the diversity of its staff and students has proven successful.

    A Proposed Quality Assurance Intelligent Model for Higher Education Institutions in Saudi Arabia

    Recent growth and the demands of dealing with increasing complexity in the management, evaluation, and accreditation of higher education institutions have led leading academic institutions and higher education authorities to adopt nonconventional solutions, known from business firms, for massive data management. Developments in analytics and information-management practices and emerging technologies have offered solutions such as data warehousing, big data, and business intelligence, and such solutions are gradually being installed in a number of renowned universities. Because the two domains (higher education and the business industry) differ in nature and aims, tailor-made solutions are needed. This paper shares the authors' experience in designing and implementing an educational information system in the College of Computer and Information Sciences at King Saud University, Saudi Arabia. The paper also highlights the differences between educational intelligence and business intelligence systems, and discusses implementation aspects that ensure a suitable data query service for running higher education institutions.

    A System for True and False Memory Prediction Based on 2D and 3D Educational Contents and EEG Brain Signals

    We studied the impact of 2D and 3D educational content on learning and memory recall using electroencephalography (EEG) brain signals. We adopted a classification approach that predicts true and false memories for both short-term memory (STM) and long-term memory (LTM) and helps decide whether 2D and 3D educational content differ in impact. In this approach, EEG brain signals are converted into topomaps, discriminative features are extracted from them, and finally a support vector machine (SVM) is employed to predict brain states. For data collection, half of sixty-eight healthy individuals watched the learning material in 2D format, whereas the rest watched the same material in 3D format. After the learning task, memory recall tasks were performed after 30 minutes (STM) and after two months (LTM), and EEG signals were recorded. For STM, a prediction accuracy of 97.5% was achieved for 3D and 96.6% for 2D; for LTM, it was 100% for both 2D and 3D. Statistical analysis of the results suggested that, for both STM and LTM, 2D and 3D materials do not differ much in their effect on learning and memory recall.
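
    The topomap-to-features-to-classifier pipeline can be sketched as follows. This is a simplified stand-in, not the paper's method: regional means replace the real discriminative features, and a minimal nearest-centroid classifier stands in for the SVM; the data shown is invented for illustration.

```python
from statistics import mean

def topomap_features(topomap, grid=2):
    """Split a 2D topomap into grid x grid regions and use each
    region's mean activation as a feature (a simplified stand-in
    for the paper's discriminative-feature extraction)."""
    rows, cols = len(topomap), len(topomap[0])
    rh, cw = rows // grid, cols // grid
    feats = []
    for gr in range(grid):
        for gc in range(grid):
            block = [topomap[r][c]
                     for r in range(gr * rh, (gr + 1) * rh)
                     for c in range(gc * cw, (gc + 1) * cw)]
            feats.append(mean(block))
    return feats

def nearest_centroid(train, labels, x):
    """Minimal classifier standing in for the SVM: predict the label
    whose class centroid is closest to x."""
    best, best_d = None, float("inf")
    for c in set(labels):
        members = [f for f, l in zip(train, labels) if l == c]
        centroid = [mean(col) for col in zip(*members)]
        d = sum((a - b) ** 2 for a, b in zip(centroid, x))
        if d < best_d:
            best, best_d = c, d
    return best

# Toy feature vectors for "false memory" vs "true memory" trials.
train = [[0, 0], [0, 1], [10, 10], [10, 11]]
labels = ["false", "false", "true", "true"]
print(nearest_centroid(train, labels, [9, 10]))  # -> "true"
```

    In practice an SVM would be trained on the full feature vectors; the stand-in only shows the shape of the score-and-predict step.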

    An empirical study on SAJQ (Sorting Algorithm for Join Queries)

    Most queries applied to database management systems (DBMSs) depend heavily on the performance of the sorting algorithm used. Besides efficiency as a primary feature, stability is a major requirement for sorting algorithms used in DBMS queries. In this paper, we study a new Sorting Algorithm for Join Queries (SAJQ) that is both efficient and stable. The proposed algorithm exploits the m-way merge algorithm to improve its time complexity: an unsorted input array of length n is arranged into m sorted sub-arrays, and the m-way merge algorithm merges the sorted sub-arrays into the final sorted output array, giving a time complexity of O(n log m), where n is the length of the input array and m is the number of sub-arrays used in sorting. The proposed algorithm keeps the stability of the keys intact. An analytical proof shows that, in the worst case, the proposed algorithm has a complexity of O(n log m). A set of experiments was also performed to investigate the performance of the proposed algorithm; the results show that it outperforms other stable sorting algorithms designed for join-based queries.

    A Shallow Convolutional Learning Network for Classification of Cancers Based on Copy Number Variations

    Genomic copy number variations (CNVs) are among the most important structural variations. They are linked to several diseases and cancer types, and cancer is a leading cause of death worldwide. Several studies have investigated the causes of cancer and its association with genomic changes to enhance its management and improve treatment opportunities; classifying cancer types based on CNVs falls into this category of research. We reviewed the recent, most successful methods that used machine learning algorithms to solve this problem and obtained a dataset that was tested by some of these methods for evaluation and comparison purposes. We propose three deep learning techniques to classify cancer types based on CNVs: a six-layer convolutional network (CNN6), a residual six-layer convolutional network (ResCNN6), and transfer learning with a pretrained VGG16 network. Experiments on data from six cancer types demonstrated an accuracy of 86% for ResCNN6, followed by 85% for CNN6 and 77% for VGG16. The results revealed a lower prediction accuracy for one of the classes, uterine corpus endometrial carcinoma (UCEC); repeating the experiments after excluding this class improved the accuracies to 91% for CNN6 and 92% for ResCNN6. We observed that UCEC and ovarian serous carcinoma (OV) share a considerable subset of their features, which makes the two classes hard for the classifiers to separate. Repeating the experiment with the six classes balanced by oversampling the training dataset enhanced both the overall and the UCEC classification accuracies.
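
    The class-balancing step can be sketched as plain random oversampling (a generic illustration; the paper's exact resampling procedure and the CNN architectures are not reproduced, and the sample labels below are invented):

```python
import random
from collections import Counter

def oversample(samples, labels, seed=0):
    """Randomly duplicate minority-class samples until every class is
    as frequent as the largest one in the training set. Only the
    training split should be resampled, never the test split."""
    rng = random.Random(seed)
    target = max(Counter(labels).values())
    by_class = {}
    for x, y in zip(samples, labels):
        by_class.setdefault(y, []).append(x)
    out_x, out_y = [], []
    for y, xs in by_class.items():
        picked = xs + [rng.choice(xs) for _ in range(target - len(xs))]
        out_x.extend(picked)
        out_y.extend([y] * target)
    return out_x, out_y

X, y = ["s1", "s2", "s3", "s4"], [0, 0, 0, 1]
Xb, yb = oversample(X, y)
print(Counter(yb))  # both classes now appear 3 times
```

    Duplicating minority samples gives the rarer class (UCEC in the paper's setting) equal weight in the loss without discarding majority-class data.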

    PSO-Based Feature Selection for Arabic Text Summarization

    Feature-based approaches play an important role and are widely applied in extractive summarization. In this paper, we use particle swarm optimization (PSO) to evaluate the effectiveness of different state-of-the-art features for summarizing Arabic text. The PSO is trained on the Essex Arabic Summaries Corpus to determine the best particle, i.e., the most appropriate single feature or combination of the eight informative and structural features used regularly by Arab summarizers. Based on the selected features and their weights in each PSO iteration, the input sentences are scored and ranked, and the top-ranking sentences are extracted to form the output summary. The output summary is then compared with a reference summary using the cosine similarity function as the fitness function. The experimental results illustrate that Arab summarizers work simply, focusing on the first sentence of each paragraph.
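
    The score-rank-compare loop evaluated for each particle can be sketched as follows (the toy sentences and two-feature vectors are invented for illustration; the corpus, the eight real features, and the PSO velocity/position updates are not reproduced):

```python
import math

def score_sentence(features, weights):
    """Weighted sum of per-sentence feature values; one weight
    vector is one PSO particle."""
    return sum(w * f for w, f in zip(weights, features))

def cosine(a, b):
    """Cosine similarity between two term vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def fitness(weights, sentences, reference_vec, top_k=2):
    """Rank sentences by score, take the top-k as the summary, and
    use cosine similarity against the reference summary as fitness."""
    ranked = sorted(sentences,
                    key=lambda s: score_sentence(s["features"], weights),
                    reverse=True)
    summary_vec = [sum(s["vec"][i] for s in ranked[:top_k])
                   for i in range(len(reference_vec))]
    return cosine(summary_vec, reference_vec)

sents = [{"features": [1, 0], "vec": [1, 0, 0]},
         {"features": [0, 1], "vec": [0, 1, 0]},
         {"features": [0, 0], "vec": [0, 0, 1]}]
print(fitness([1.0, 0.5], sents, [1, 1, 0]))  # near 1.0: summary matches the reference
```

    The swarm would repeatedly evaluate this fitness for each particle's weight vector and move particles toward the best-scoring weights.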

    A Novel Vertical Fragmentation, Replication and Allocation Model in DDBSs

    Modern database systems are commonly distributed, with data kept at separate locations (sites). The sites are connected through communication links, which may be slow and become bottlenecks for data transfer between sites. Data replication is one of the effective methods for dealing with such situations and achieving improved performance in distributed database systems (DDBSs). In this work, the authors explore a new model for improving performance in a distributed database environment using a vertical fragmentation method together with novel replication and allocation techniques. The solution procedure consists of a new vertical fragmentation model to fragment a relation and two phases of allocating fragments to nodes. The paper discusses the tradeoffs between different scenarios for deciding on attribute allocation to sites, evaluating performance against the collected requirements. This model significantly reduces communication cost and query response time in DDBSs.
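
    Vertical fragmentation decisions are classically driven by an attribute affinity matrix built from query access patterns. A minimal sketch of that standard computation follows (this is the textbook formulation, not necessarily this paper's new fragmentation model; the query/attribute data is invented):

```python
def affinity_matrix(usage, freq):
    """aff[i][j] = total frequency of queries that access both
    attribute i and attribute j. Vertical fragmentation then groups
    attributes with high mutual affinity into one fragment, so queries
    rarely need to cross fragment (and hence site) boundaries."""
    n = len(usage[0])
    aff = [[0] * n for _ in range(n)]
    for q, row in enumerate(usage):
        for i in range(n):
            for j in range(n):
                if row[i] and row[j]:
                    aff[i][j] += freq[q]
    return aff

# 3 queries over 4 attributes; usage[q][a] = 1 if query q reads attribute a.
usage = [[1, 0, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 1]]
freq = [10, 5, 20]
aff = affinity_matrix(usage, freq)
print(aff[0][2])  # attributes 0 and 2 are co-accessed with frequency 10
```

    Attribute 3 is only accessed alone, so it would naturally fall into its own fragment; attributes 0 and 2 (and 1 and 2) have nonzero affinity and are candidates to be co-located.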

    Transforming an imperative design into an object-oriented design

    Computer Science Department, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia.
    Most traditional and legacy systems were designed using traditional methodologies such as the Structured Analysis/Structured Design (SA/SD) methodology. The design of such a system is called an imperative design. After the introduction of object-oriented technology, there are compelling reasons to redevelop those systems using the new technology to benefit from its merits. Two choices are possible: either develop them from scratch using some object-oriented methodology, or use the available design documents (i.e., the imperative design) of those systems and transform their designs into object-oriented designs. The second choice clearly saves both development cost and time. This paper reports on an effort to build support for the second choice. We started our effort in 1992 and proposed a framework for a redesign methodology. Our proposed redesign methodology, imperative design to object-oriented design (ID-OOD), transforms a given imperative design of an already implemented system into an object-oriented design using the design documents of the system. The methodology works in four phases, which are presented formally. We also illustrate the methodology with a case study.
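
    As a toy illustration of the kind of transformation involved (the ID-OOD phases operate on design documents, not code; this example and its names are invented): in an imperative design, a data record and the free procedures that operate on it are separate, and the object-oriented redesign groups them into one class.

```python
# Imperative design: a data record plus free procedures that operate on it.
def make_account(owner):
    return {"owner": owner, "balance": 0}

def deposit(account, amount):
    account["balance"] += amount

# Object-oriented redesign: the record becomes instance state and the
# procedures become methods behind a single interface.
class Account:
    def __init__(self, owner):
        self.owner = owner
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

acc = Account("Aisha")
acc.deposit(100)
print(acc.balance)  # 100
```

    The transformation clusters procedures around the data structures they manipulate, which is the core intuition behind recovering classes from an imperative design.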
