
    Computing in Italian teaching, research, and documentation in France and Italy

    Table of contents of the issue: http://archive-edutice.ccsd.cnrs.fr/edutice-00000838
    In Italy, until last year, one could note the absence of any global strategy for introducing computing into education, apart from the IRIS project, designed in 1982 by the CEDE (Centro Europeo dell'Educazione), which planned, by 1986, the production of some twenty pieces of courseware of 20 to 30 hours each, intended to "introduce the sciences and techniques of information into primary and secondary schools". As can be judged from a recent publication by the AICA, which has just carried out a first national survey of the courseware produced in the various disciplines, this situation has not hindered the development of educational software: numerous teaching experiments, mostly individual, have been encouraged in this way, sometimes even at the cost of a certain waste of human and economic resources. In France, in the field of Italian-language teaching, various grammar exercises, often structural in type, have been produced by secondary-school teachers: these are generally individual productions, designed episodically and programmed not only in different versions of LSE but also on mutually incompatible computers. Consequently, this courseware rarely travels beyond the school where it was created, with the exception of a few programs collected by certain CRDPs and made available to other institutions owning the same hardware. Moreover, in recent years, in Italy as in France, authoring systems have been put forward as the only software that makes possible an individual, decentralized development of courseware by non-computer-scientists, as opposed to a production centralized in the hands of a small number of specialists.

    Lightweight approach to the cold start problem in video lecture recommendation

    In this paper we present our participation as SWAPTeam in the ECML/PKDD 2011 Discovery Challenge, for the task on the cold start problem focused on making recommendations for new video lectures. The main idea is to use a content-based approach, because it is less sensitive to the cold start problem that commonly affects pure collaborative filtering recommenders. The integration strategy, based on hybridization, and the required scalability both shape the developed components.

    Cold Start Problem: a Lightweight Approach at ECML/PKDD 2011 - Discovery Challenge

    The paper presents our participation [5] in the ECML/PKDD 2011 Discovery Challenge, for the task on the cold start problem. The challenge dataset was gathered from the VideoLectures.Net web site, which exploits a Recommender System (RS) to guide users through its large multimedia repository of video lectures. Cold start concerns the performance issues that arise when an RS must handle new items and new users, and it is commonly associated with pure collaborative-filtering-based RSs. The proposed approach exploits the challenge data to predict the frequencies of pairs of cold and old items; the highest predicted values are then used to provide recommendations.
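
    A minimal sketch of the pair-frequency idea above, assuming TF-IDF vectors over lecture descriptions as the content representation (the feature choice, the toy data, and all names here are illustrative assumptions, not details taken from the paper):

    # Estimate how strongly a cold (new) lecture would pair with an old one
    # via content similarity, then recommend the cold lectures with the
    # highest predicted values for a user who viewed a given old lecture.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    old_lectures = {"L1": "introduction to machine learning and data mining",
                    "L2": "a tutorial on bayesian networks"}
    cold_lectures = {"C1": "deep learning methods for computer vision",
                     "C2": "a survey of medieval European history"}

    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(list(old_lectures.values())
                                     + list(cold_lectures.values()))
    n_old = len(old_lectures)
    # similarity[i, j] stands in for the predicted (old i, cold j) pair frequency
    similarity = cosine_similarity(tfidf[:n_old], tfidf[n_old:])

    def recommend_cold(old_id, k=1):
        """Rank cold lectures for a user who viewed the given old lecture."""
        i = list(old_lectures).index(old_id)
        ranked = sorted(zip(cold_lectures, similarity[i]),
                        key=lambda pair: pair[1], reverse=True)
        return ranked[:k]

    print(recommend_cold("L1"))   # -> [('C1', ...)] on this toy data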

    Cold Start Problem: a Lightweight Approach

    The chapter presents the SWAPTeam participation in the ECML/PKDD 2011 Discovery Challenge, for the task on the cold start problem focused on making recommendations for new video lectures. The developed solution uses a content-based approach because it is less sensitive to the cold start problem that commonly affects pure collaborative filtering recommenders. The Challenge organizers encouraged solutions that could actually be deployed on VideoLectures.Net, so the proposed integration strategy is hybridization by switching. In addition, the underlying idea of the proposed solution is that providing recommendations about cold items remains a risky task, so curtailing the computational resources devoted to it is a reasonable strategy for controlling the performance trade-offs of a day-to-day running system. The main contribution concerns the compromise between the recommendation accuracy and the scalability of the proposed approach.
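
    A minimal sketch of hybridization by switching as described above: cold items are routed to the content-based component, warm items to the collaborative one. The function names, the threshold, and the interaction counts are assumptions for illustration:

    def switching_recommender(user_id, item_id, interaction_counts,
                              cb_score, cf_score, cold_threshold=5):
        """Score one item for one user, switching on how cold the item is.

        interaction_counts: dict mapping item_id to its number of known
        interactions; cb_score / cf_score: callables (user_id, item_id) -> float.
        """
        if interaction_counts.get(item_id, 0) < cold_threshold:
            # Too few interactions for collaborative filtering to be
            # reliable: fall back to the content-based component.
            return cb_score(user_id, item_id)
        return cf_score(user_id, item_id)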

    Lexical and semantic resources for NLP: from words to meanings

    A user expresses her information need through words with a precise meaning, but from the machine's point of view this meaning does not come with the word: a further step is needed to associate it with the words automatically. This requires techniques that process human language, as well as linguistic and semantic knowledge, stored in distinct and heterogeneous resources, which plays an important role in all Natural Language Processing (NLP) steps. Managing these resources is a challenging problem, as is the correct association between the URIs coming from the resources and the meanings of the words. This work presents a service that, given a lexeme (an abstract unit of morphological analysis in linguistics, roughly corresponding to the set of words that are different forms of the same word), returns all the syntactic and semantic information collected from a list of lexical and semantic resources. The proposed strategy consists in merging data originating from stable resources, such as WordNet, with data collected dynamically from evolving sources, such as the Web or Wikipedia. The strategy is implemented in a wrapper around a set of popular linguistic resources that provides a single point of access to them, transparently to the user, in order to solve the computational-linguistics problem of obtaining a rich set of linguistic and semantic annotations in a compact way.
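
    A minimal sketch of the single-point-of-access idea, using WordNet via NLTK as the stable resource; the dynamic sources (the Web, Wikipedia) are left as a stub, and the output layout is an assumption, not the paper's actual interface:

    from nltk.corpus import wordnet as wn   # requires nltk.download("wordnet")

    def lookup(lexeme):
        """Collect syntactic and semantic annotations for a lexeme."""
        entry = {"lexeme": lexeme, "senses": []}
        for synset in wn.synsets(lexeme):
            entry["senses"].append({
                "id": synset.name(),              # e.g. 'lecture.n.01'
                "pos": synset.pos(),              # syntactic category
                "definition": synset.definition(),
                "synonyms": [l.name() for l in synset.lemmas()],
                "hypernyms": [h.name() for h in synset.hypernyms()],
            })
        # Data gathered dynamically from evolving sources (Web, Wikipedia)
        # would be merged into the same entry here.
        return entry

    print(lookup("lecture")["senses"][0]["definition"])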

    Fuzzy clustering in user segmentation

    E-Government is becoming more attentive to providing personalized services to citizens, so that they can benefit from better services with less time and effort. To develop citizen-centered services, a fundamental activity consists in mining the needs and preferences of users by identifying homogeneous groups of users, also known as user segments, that share similar characteristics. Since the same user often has characteristics shared by several segments, in this work we propose an approach based on fuzzy clustering for inferring user segments that can be exploited to offer personalized services that better satisfy user needs and expectations. User segments are inferred from data, gathered by questionnaires, that essentially describe the demographic characteristics of users. For each derived segment, a user profile is defined that summarizes the characteristics shared by the users belonging to that segment. Results obtained on a case study are reported in the last part of the paper.
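
    A compact fuzzy c-means sketch for user segmentation, assuming numeric demographic features; unlike hard clustering, every user receives a degree of membership in each segment. The data, feature choice, and parameters are illustrative, not taken from the case study:

    import numpy as np

    def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
        """X: (n_users, n_features). Returns (centers, memberships U)."""
        rng = np.random.default_rng(seed)
        U = rng.random((len(X), c))
        U /= U.sum(axis=1, keepdims=True)        # memberships sum to 1 per user
        for _ in range(n_iter):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]
            # distance of each user to each segment center
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
            U = 1.0 / d ** (2.0 / (m - 1.0))     # standard FCM membership update
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    # Toy questionnaire data: (age, daily internet use in hours)
    X = np.array([[23, 6.0], [25, 5.5], [61, 1.0], [58, 1.5], [40, 3.0]])
    centers, U = fuzzy_c_means(X)
    print(np.round(U, 2))   # each row: one user's membership in both segments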

    Serendipitous Encounters along Dynamically Personalized Museum Tours

    Today Recommender Systems (RSs) are commonly used for various purposes, especially in e-commerce and information-filtering tools. Content-based RSs rely on the concept of similarity between items, under the common belief that the user is interested in what is similar to what she has already bought, searched, or visited. We believe that in some contexts this assumption is wrong: it is the case of acquiring unsearched but still useful items or pieces of information. This is called serendipity. Our purpose is to stimulate users and facilitate such serendipitous encounters. The paper presents a hybrid recommender system that joins a content-based approach with serendipitous heuristics in order to provide surprising suggestions as well. The reference scenario concerns personalized tours in a museum, where serendipitous items are introduced through slight diversions from the context-aware tours.
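
    A minimal sketch of the hybrid scoring idea above: blend content-based relevance with a serendipity heuristic that rewards exhibits unlike those already visited, so the tour can take slight, surprising diversions. The weighting scheme and the feature vectors are assumptions for illustration:

    import numpy as np

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-10))

    def tour_score(candidate, profile, visited, alpha=0.3):
        """candidate, profile: feature vectors; visited: vectors of seen items.

        Relevance keeps the tour on-topic; unexpectedness (distance from the
        items already visited) injects the serendipitous diversion."""
        relevance = cosine(candidate, profile)
        unexpectedness = 1.0 - max(cosine(candidate, v) for v in visited)
        return (1 - alpha) * relevance + alpha * unexpectedness

    profile = np.array([1.0, 0.2, 0.4])            # visitor's inferred interests
    visited = [np.array([0.9, 0.1, 0.3])]          # exhibits already seen
    print(tour_score(np.array([0.3, 1.0, 0.1]), profile, visited))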