
    Toward a collective intelligence recommender system for education

    The development of Information and Communication Technology (ICT) has revolutionized the world and moved us into the information age; however, accessing and handling this large amount of information costs valuable time. Teachers in Higher Education in particular use the Internet as a tool to consult materials and content for the development of their subjects. The Internet offers very broad services, and it is often difficult for users to find content quickly and easily. This problem keeps growing, causing students to spend much of their time searching for information rather than on synthesis, analysis, and the construction of new knowledge. In this context, several questions emerge: Is it possible to design learning activities that make information search valuable and encourage collective participation? What conditions must an ICT tool that supports information search meet in order to optimize students' time and learning? This article presents the use and application of a Recommender System (RS) designed on paradigms of Collective Intelligence (CI). The RS encourages collective learning and the authentic participation of students. The research combines a literature study with an analysis of the ICT tools that have emerged in the fields of CI and RS. Design-Based Research (DBR) was also used to compile and summarize the collective intelligence approaches and filtering techniques reported in the Higher Education literature, as well as to incrementally improve the tool. The exploratory study evidenced several benefits, among which the following stand out:
    • It improves student motivation, as it helps students discover new content of interest in an easy way.
    • It saves time in the search and classification of teaching material of interest.
    • It fosters specialized reading and inspires competence as a means of learning.
    • It gives the teacher the ability to generate reports on students' trends and behaviors, and to assess the quality of learning material in real time.
    The authors consider that ICT tools combining the CI and RS paradigms presented in this work improve the construction of student knowledge and motivate its collective development in cyberspace; in addition, the content filtering model used supports the design of collective intelligence models and strategies in Higher Education.
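    As a rough illustration of the content filtering plus collective intelligence combination the abstract describes, the sketch below ranks learning resources by tag overlap with a student's interests, weighted by peers' ratings. The data, names, and scoring formula are invented assumptions for illustration, not the authors' actual model.

```python
# Minimal content-filtering sketch; resources, ratings, and the
# overlap-times-rating score are all hypothetical, not the paper's model.

# Hypothetical data: resources tagged by topic, plus ratings from peers.
resources = {
    "intro_to_ml.pdf": {"machine learning", "statistics"},
    "sql_basics.html": {"databases", "sql"},
    "deep_learning.pdf": {"machine learning", "neural networks"},
}
peer_ratings = {
    "intro_to_ml.pdf": [5, 4],
    "sql_basics.html": [3],
    "deep_learning.pdf": [5, 5, 4],
}

def recommend(student_interests, top_n=2):
    """Rank resources by tag overlap with the student's interests,
    weighted by the average rating the collective has given them."""
    scores = {}
    for name, tags in resources.items():
        overlap = len(tags & student_interests)
        if overlap == 0:
            continue
        ratings = peer_ratings.get(name, [])
        avg_rating = sum(ratings) / len(ratings) if ratings else 1.0
        scores[name] = overlap * avg_rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend({"machine learning", "sql"}))
```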

    Exploiting Recurring Patterns to Improve Scalability of Parking Availability Prediction Systems

    Parking Guidance and Information (PGI) systems aim at supporting drivers in finding suitable parking spaces, also by predicting availability at the driver's Estimated Time of Arrival (ETA), leveraging information about the general parking availability situation. To make these predictions, most proposals in the literature dealing with on-street parking need to train a model for each road segment, which raises significant scalability issues when deploying a city-wide PGI system. By investigating a real dataset, we found that on-street parking dynamics show high temporal auto-correlation. In this paper we present a new processing pipeline that exploits these recurring trends to improve scalability. The proposal includes two steps that reduce both the number of required models and the number of training examples. The effectiveness of the proposed pipeline has been empirically assessed on a real dataset of on-street parking availability from San Francisco (USA). Results show that the proposal provides parking predictions whose accuracy is comparable to state-of-the-art solutions based on one model per road segment, while requiring only a fraction of the training costs, and is thus more likely to scale to city-wide scenarios.
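    A minimal sketch of the pipeline idea as summarized above: group road segments whose recurring weekly availability profiles are similar, so one model can be shared per cluster instead of one per segment. The data shapes, the use of k-means, and all names are assumptions for illustration, not the paper's exact method.

```python
# Sketch: exploit recurring weekly patterns to share one predictor
# across many road segments (illustrative, not the paper's pipeline).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_segments, hours_per_week = 200, 24 * 7
# Hypothetical availability histories: each row is one segment's weekly profile.
profiles = rng.random((n_segments, hours_per_week))

# Group segments whose recurring weekly profiles are similar...
k = 10  # far fewer models than segments
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(profiles)

# ...then train a single model per cluster instead of one per segment.
for c in range(k):
    members = np.where(clusters == c)[0]
    # train_model(profiles[members])  # hypothetical: one shared model per cluster
    print(f"cluster {c}: {len(members)} segments share one model")
```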

    SQL Query Completion for Data Exploration

    Within the big data tsunami, relational databases and SQL are still there and remain mandatory in most cases for accessing data. On the one hand, SQL is easy to use by non-specialists and allows users to identify pertinent initial data at the very beginning of the data exploration process. On the other hand, it is not always so easy to formulate SQL queries: nowadays, it is more and more frequent to have several databases available for one application domain, some of them with hundreds of tables and/or attributes. Identifying the pertinent conditions to select the desired data, or even identifying the relevant attributes, is far from trivial. To make it easier to write SQL queries, we propose the notion of SQL query completion: given a query, it suggests additional conditions to be added to its WHERE clause. This completion is semantic, as it relies on the data in the database, unlike current completion tools, which are mostly syntactic. Since the process can be repeated over and over again -- until the data analyst reaches her data of interest -- SQL query completion facilitates the exploration of databases. SQL query completion has been implemented in a SQL editor on top of a database management system. For the evaluation, two questions need to be studied: first, does the completion speed up the writing of SQL queries? Second, is the completion easily adopted by users? A thorough experiment has been conducted on a group of 70 computer science students divided into two groups (one with the completion and one without) to answer these questions. The results are positive and very promising.
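    A toy illustration of what semantic, data-driven WHERE-clause completion could look like: rank attribute=value conditions by how often they hold among the rows the current query already selects. The schema and ranking heuristic below are invented for the example and are not the authors' algorithm.

```python
# Toy data-driven WHERE-clause suggestion over an in-memory SQLite table;
# schema and heuristic are hypothetical, for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE papers (topic TEXT, venue TEXT, year INT);
    INSERT INTO papers VALUES
      ('databases', 'VLDB', 2017), ('databases', 'SIGMOD', 2017),
      ('ml', 'NeurIPS', 2018), ('databases', 'VLDB', 2018);
""")

def suggest_conditions(table, base_where="1=1", top_n=3):
    """Suggest extra equality conditions ranked by how often each
    attribute=value pair occurs among the rows the query already selects."""
    cols = [row[1] for row in conn.execute(f"PRAGMA table_info({table})")]
    suggestions = []
    for col in cols:
        rows = conn.execute(
            f"SELECT {col}, COUNT(*) FROM {table} WHERE {base_where} "
            f"GROUP BY {col}"
        ).fetchall()
        suggestions += [(count, f"{col} = {value!r}") for value, count in rows]
    return [cond for _, cond in sorted(suggestions, reverse=True)[:top_n]]

print(suggest_conditions("papers"))
```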

    A generic persistence model for CLP systems (and two useful implementations)

    This paper describes a model of persistence in (C)LP languages and two different and practically very useful ways to implement this model in current systems. The fundamental idea is that persistence is a characteristic of certain dynamic predicates (i.e., those which encapsulate state). The main effect of declaring a predicate persistent is that the dynamic changes made to such predicates persist from one execution to the next. After proposing a syntax for declaring persistent predicates, a simple, file-based implementation of the concept is presented and some examples are shown. An additional implementation is presented which stores persistent predicates in an external database. Abstracting the concept of persistence from its implementation allows developing applications which can store their persistent predicates alternatively in files or databases, with only a few simple changes to a declaration stating the location and modality used for persistent storage. The paper presents the model, the implementation approach for both files and relational databases, a number of optimizations of the process (using information obtained from static global analysis and goal clustering), and performance results from an implementation of these ideas.
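    The paper's persistent predicates are (C)LP constructs; the following is only a rough Python analogy of the file-based implementation idea, namely dynamic facts whose updates survive from one execution to the next. All names here are invented.

```python
# Rough analogy of file-based persistent predicates: facts asserted into
# this "predicate" are replayed from disk on startup, so dynamic updates
# persist across runs. Names and format are hypothetical.
import json
import os

class PersistentPredicate:
    def __init__(self, name, path="facts"):
        os.makedirs(path, exist_ok=True)
        self.file = os.path.join(path, f"{name}.json")
        self.facts = []
        if os.path.exists(self.file):
            with open(self.file) as f:
                self.facts = json.load(f)  # restore state from last execution

    def assertz(self, *args):  # named after Prolog's assertz/1
        self.facts.append(list(args))
        with open(self.file, "w") as f:
            json.dump(self.facts, f)  # naive: rewrite the file on each change

counter = PersistentPredicate("counter")
counter.assertz("run", len(counter.facts) + 1)
print(counter.facts)  # grows across separate executions of this script
```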

    Distantly Labeling Data for Large Scale Cross-Document Coreference

    Cross-document coreference, the problem of resolving entity mentions across multi-document collections, is crucial to automated knowledge base construction and data mining tasks. However, the scarcity of large labeled data sets has hindered supervised machine learning research for this task. In this paper we develop and demonstrate an approach based on "distantly labeling" a data set from which we can train a discriminative cross-document coreference model. In particular, we build a dataset of more than a million people mentions extracted from 3.5 years of New York Times articles, leverage Wikipedia for distant labeling with a generative model (and measure the reliability of such labeling); then we train and evaluate a conditional random field coreference model that has factors on cross-document entities as well as mention pairs. This coreference model obtains high accuracy in resolving mentions and entities that are not present in the training data, indicating applicability to non-Wikipedia data. Given the large amount of data, our work is also an exercise demonstrating the scalability of our approach.
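    A toy version of the distant-labeling idea: mentions whose surface strings unambiguously match a known Wikipedia title inherit that entity's label, yielding noisy training pairs without manual annotation. The titles and mentions below are invented stand-ins for Wikipedia and the NYT corpus, and the paper itself uses a generative model rather than this exact-match rule.

```python
# Toy distant labeling: invented stand-in data, exact-match heuristic.
wikipedia_titles = {
    "John Smith (explorer)", "John Smith (economist)",
    "John Glenn", "Ada Lovelace",
}

# Map surface strings to candidate titles; ambiguous strings get several.
candidates = {}
for title in wikipedia_titles:
    surface = title.split(" (")[0]
    candidates.setdefault(surface, set()).add(title)

mentions = ["John Glenn", "Ada Lovelace", "John Smith", "John Glenn"]

# Keep only unambiguous matches; "John Smith" has two candidates and is dropped.
labeled = [(m, next(iter(candidates[m])))
           for m in mentions
           if m in candidates and len(candidates[m]) == 1]
print(labeled)  # noisy but free labels, usable to train a coreference model
```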

    Optical tomography: Image improvement using mixed projection of parallel and fan beam modes

    Mixed parallel and fan beam projection is a technique used to increase image quality. This research focuses on enhancing image quality in optical tomography. Image quality can be quantified by measuring the Peak Signal-to-Noise Ratio (PSNR) and the Normalized Mean Square Error (NMSE). The findings of this research show that by combining parallel and fan beam projection, image quality can be improved by more than 10% in terms of PSNR and by more than 100% in terms of NMSE compared to a single parallel beam.
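    The two quality metrics named in the abstract can be computed with their standard definitions, as in this sketch; the paper's exact conventions (e.g., the peak value used in PSNR) are assumptions here.

```python
# Standard PSNR and NMSE definitions; conventions assumed, not the paper's.
import numpy as np

def psnr(reference, image, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means better reconstruction."""
    mse = np.mean((reference - image) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(peak ** 2 / mse)

def nmse(reference, image):
    """Normalized Mean Square Error; lower means better reconstruction."""
    return np.sum((reference - image) ** 2) / np.sum(reference ** 2)

ref = np.full((8, 8), 128.0)
noisy = ref + np.random.default_rng(0).normal(0, 5, ref.shape)
print(f"PSNR = {psnr(ref, noisy):.1f} dB, NMSE = {nmse(ref, noisy):.2e}")
```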

    Investigations using data in Alabama from ERTS-A

    There are no author-identified significant results in this report

    Data mining as a tool for environmental scientists

    Over recent years a huge library of data mining algorithms has been developed to tackle a variety of problems in fields such as medical imaging and network traffic analysis. Many of these techniques are far more flexible than more classical modelling approaches and could be usefully applied to data-rich environmental problems. Certain techniques, such as Artificial Neural Networks, Clustering, Case-Based Reasoning and, more recently, Bayesian Decision Networks, have found application in environmental modelling, while other methods, for example classification and association rule extraction, have not yet been taken up on any wide scale. We propose that these and other data mining techniques could be usefully applied to difficult problems in the field. This paper introduces several data mining concepts and briefly discusses their application to environmental modelling, where data may be sparse, incomplete, or heterogeneous.
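    As a small example of one technique the paper discusses, the sketch below clusters synthetic environmental readings with k-means; the variables and data are invented purely for illustration.

```python
# Illustrative clustering of synthetic environmental monitoring data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical readings: temperature (°C), pH, dissolved oxygen (mg/L).
readings = np.column_stack([
    rng.normal(15, 4, 300),
    rng.normal(7.2, 0.5, 300),
    rng.normal(9, 2, 300),
])

# Scale features so no single variable dominates, then group similar sites.
X = StandardScaler().fit_transform(readings)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(np.bincount(labels))  # number of sites per environmental regime
```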