Some strategies for exploiting large corpora in textual data analysis (Quelques stratégies pour l’exploitation de gros corpus en analyse des données textuelles)

Abstract

Our work on the Scriptorium project has confronted us with a range of problems facing analysts who apply text mining to large heterogeneous corpora (intranet, WWW, document-based databases). This paper presents several solutions concerning the following points: the extraction of relevant sub-sections of the corpus, meta-data, efficient storage, and historisation (tracking the corpus as it evolves over time). We introduce two original solutions: document storage based on collections of self-describing texts with embedded meta-data in the form of mark-up (rather than a DBMS or file-based approach, since full-text indexing at this scale is costly); and the use of an extractor based on the TOPIC software product to retrieve relevant paragraphs and assemble them into homogeneous sub-corpora of exploitable size (under 10 MB). We also describe the strategies we adopted for comparing different analyses of the corpus in a historical perspective, in particular the transformation of ALCESTE class profiles into TOPIC concepts, aimed at providing fixed, quantifiable measurements of the density of certain topics in the texts.

This paper was given at the 4èmes Journées Internationales d’Analyse des Données Textuelles, Nice, France, 18-21 February 1998. It was published in the proceedings volume and is freely available at http://www.cavi.univ-paris3.fr/lexicometrica/jadt/jadt1998/JADT1998.ht
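By way of illustration only, the following Python sketch shows one way the two ideas summarised above could fit together: documents stored as self-describing texts with meta-data embedded as mark-up (so no external DBMS record is needed), and relevant paragraphs gathered into a homogeneous sub-corpus capped at roughly 10 MB. It is not the authors' implementation, which relied on the commercial TOPIC engine; the tag layout, the function names, and the `is_relevant` predicate (standing in for TOPIC's relevance scoring) are all assumptions made for the sketch.

```python
# Hypothetical sketch, not the Scriptorium code: self-describing document
# storage plus size-capped sub-corpus assembly, as outlined in the abstract.
import re
from pathlib import Path

MAX_SUBCORPUS_BYTES = 10 * 1024 * 1024  # "exploitable size" ceiling (< 10 MB)

def wrap_document(text: str, meta: dict[str, str]) -> str:
    """Embed meta-data directly in the text as mark-up, so each stored
    file describes itself and needs no external DBMS record."""
    header = "".join(f'<meta name="{k}" content="{v}">\n' for k, v in meta.items())
    return f"<doc>\n{header}<body>\n{text}\n</body>\n</doc>\n"

def paragraphs(doc: str):
    """Yield the paragraphs of a stored document's body."""
    body = re.search(r"<body>\n(.*?)\n</body>", doc, re.S)
    if body:
        yield from (p for p in body.group(1).split("\n\n") if p.strip())

def build_subcorpus(store: Path, out: Path, is_relevant) -> int:
    """Collect relevant paragraphs from every stored document into one
    homogeneous sub-corpus, stopping at the size ceiling."""
    written = 0
    with out.open("w", encoding="utf-8") as sub:
        for path in sorted(store.glob("*.txt")):
            for para in paragraphs(path.read_text(encoding="utf-8")):
                if not is_relevant(para):
                    continue
                chunk = para + "\n\n"
                if written + len(chunk.encode("utf-8")) > MAX_SUBCORPUS_BYTES:
                    return written
                sub.write(chunk)
                written += len(chunk.encode("utf-8"))
    return written
```

The design point the sketch tries to capture is the one the abstract makes: keeping meta-data inside each text avoids both a DBMS and corpus-wide full-text indexing, while paragraph-level extraction keeps each analysis corpus small enough for tools such as ALCESTE.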
