
    A design proposal of an online corpus-driven dictionary of Portuguese for University Students

    University students are expected to read and write academic texts as part of typical literacy practices in higher education settings. Hyland (2009, p. viii-ix) states that meeting these literacy demands involves “learning to use language in new ways”. In order to support the mastery of written academic Portuguese, the primary aim of this PhD research was to propose a design for an online corpus-driven dictionary of Portuguese for university students (DOPU) attending Portuguese-medium institutions, speakers of Brazilian Portuguese (BP) and European Portuguese (EP), either as a mother tongue or as an additional language. The semi-automated approach to dictionary-making (Gantar et al., 2016), which is the latest method for dictionary compilation and had never been employed for Portuguese, was tested as a means of providing lexical content that would serve as a basis for compiling DOPU entries. It consists of automatic extraction of data from the corpus and import into a dictionary writing system, where lexicographers then analyse, validate and edit the information. Evaluation of this method for designing DOPU was thus a secondary goal of this research. The procedure was performed with the Sketch Engine (Kilgarriff et al., 2004) corpus tool, and the dictionary writing system used was iLex (Erlandsen, 2010). A number of new resources and tools were created especially for the extraction, given the unsuitability of the existing ones: a 40 million-word corpus of academic texts (CoPEP), balanced between BP and EP and covering six areas of knowledge, a sketch grammar, and GDEX configurations for academic Portuguese. Evaluation of the adoption of the semi-automated approach in the context of the DOPU design indicated that, although further development of these brand-new resources and tools, as well as of the procedure itself, would greatly improve the quality of DOPU's lexical content, the extracted data can already be used as a basis for entry writing. The positive results of the experiment also suggest that this approach should be highly beneficial to other Portuguese lexicographic projects as well.
In higher education, students are expected to participate, to a greater or lesser extent, in reading and writing activities involving the texts that typically circulate in the university context, such as articles, books, exams, essays, monographs, projects, final-year papers, dissertations and theses, among others. However, these practices often pose real challenges to students, who are not familiar with these new discursive genres. According to Hyland (2009, p. viii-ix), the condition for succeeding in these practices is “learning to use language in new ways”. Academic language has been an object of research for many years, with work especially well developed for English. While for a long period attention was focused on English for Academic Purposes (EAP), given the unparalleled commercial appeal of that area, it has more recently been recognised that native speakers of English also need to learn academic English, since, as noted above, it is a new way of using the language that university students do not yet command. It is therefore natural that the vast majority of pedagogical materials, such as books, manuals, grammars, word lists and dictionaries, are produced for English-language contexts.
Like English and many other languages, Portuguese is also used in universities as a language in and through which knowledge is built. Indeed, over the last 15 years access to university education has expanded in Brazil, in parallel with a large increase in the number of foreign students pursuing higher education in Brazil and Portugal, which reinforces the role of Portuguese as a language of scientific production and dissemination. The language-policy efforts and measures of the Community of Portuguese Language Countries (CPLP) to support and promote Portuguese as a language of science are to be welcomed. Despite the clear importance of academic Portuguese, its presence as an object of study in a dedicated field is still quite limited. Some growth has been observed in discourse-oriented approaches to academic language; descriptions at the lexico-grammatical level, however, are still very scarce. In particular, as regards lexicographic resources as pedagogical aids, no dictionary of academic Portuguese specifically created to meet the needs of university students is known to exist. Given the demand described above and the gap in current research, this doctoral research sought to contribute both to the field of resources for teaching academic Portuguese and to that of lexicographic resource development, by proposing the design of an online corpus-driven dictionary of Portuguese for university students (DOPU). Grounded in a view of Portuguese as a pluricentric language, the dictionary covers the Brazilian Portuguese (BP) and European Portuguese (EP) varieties, and its target audience comprises speakers of Portuguese both as a mother tongue and as an additional language. To build the design, the most recent dictionary compilation method currently available was adopted, namely the semi-automated approach to dictionary-making (Gantar et al., 2016). This method consists of automatically extracting data from a corpus and importing it into a dictionary writing system, where lexicographers analyse, edit and validate information that has been automatically pre-organised into entry fields according to previously established definitions. The approach is novel in that the starting point for lexical analysis of the corpus is no longer the corpus analysis tool but the dictionary writing system itself. Testing this approach in developing the DOPU design was a secondary goal of this doctoral research, since the method had never been applied to the construction of Portuguese dictionaries. The programs used to carry out the extraction procedure were Sketch Engine (SkE) (Kilgarriff et al., 2004), probably the most sophisticated corpus creation, analysis and maintenance tool available today, and iLex (Erlandsen, 2010), a highly flexible dictionary writing system with a large data-processing capacity.
Implementing the approach requires: a corpus annotated with part-of-speech tags; a sketch grammar (a file with grammatical relations and processing directives that the SkE system uses to compute different types of relations through statistical calculations); a GDEX configuration, i.e. Good Dictionary Examples (a configuration with classifiers that score sentences according to established criteria); and parameter settings (minimum frequency of collocates and of grammatical relations). Given that the existing Portuguese corpora, sketch grammar and GDEX configuration were not suited to the purpose of this data extraction, namely the compilation of DOPU entries, new resources had to be created. The Corpus de Português Escrito em Periódicos (CoPEP) was compiled, with 40 million words, balanced between the BP and EP varieties and covering six areas of knowledge. The corpus metadata were annotated in detail, allowing advanced queries. To our knowledge, it is the first international corpus of academic Portuguese. To standardise lexical analysis and reduce imbalances in statistical counts, CoPEP was post-processed with the Lince converter so as to update the spelling of each variety in accordance with the 1990 Portuguese Language Orthographic Agreement. A sketch grammar was created specifically for CoPEP and can therefore be applied to other Portuguese corpora annotated with the same tagger. The tagger offered by default in SkE, Freeling v3, was used, and the new sketch grammar has more, and more precise, grammatical relations than the one offered by default in SkE. Users working with Portuguese corpora annotated with Freeling in SkE can therefore use this version, which is already available in Sketch Engine. A GDEX configuration had previously been produced to provide examples for the compilation of the Oxford Portuguese Dictionary (2015). However, because it is quite general, was designed for a Web corpus and aims to select examples for a bilingual Portuguese-English/English-Portuguese dictionary, it was considered more appropriate to create an entirely new configuration. This resource was developed taking into account the characteristics of language use found in CoPEP and the profile of the DOPU user. The procedure for automatic extraction of data from CoPEP and import into iLex was based, with adaptations, on the procedure used to create dictionaries of Slovene (the language for which the method was developed). Two elements were added to the extraction process: the longest-commonest match (LCM), which shows the most common realisation of the keyword-collocate pair and helps reveal the most typical use of collocations; and suggestions for assigning typical-variety labels to both the keyword and the collocate. Evaluation of the writing of pilot entries indicated that extracting data from CoPEP and importing them into iLex worked extremely well: lexical analysis could be quite sophisticated without the time routinely required when entries are drafted from concordance lines. Some data that were not extracted automatically in this research and had to be analysed manually in the corpus tool could be included in a future version of the procedure.
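For illustration only, the sketch below shows the kind of scoring a GDEX configuration encodes: candidate sentences containing a keyword are penalised for unsuitable length, rare vocabulary, anaphoric openings and typographic noise, and the highest-scoring sentence is kept as a dictionary example. The classifiers, weights and thresholds here are invented for the example and are not the configuration developed for CoPEP/DOPU.

```python
# Illustrative GDEX-style scorer: ranks candidate example sentences for a
# dictionary entry. Weights and thresholds are invented for illustration;
# they are not the configuration developed for CoPEP/DOPU.
import re

def gdex_score(sentence, keyword, common_words, min_len=8, max_len=25):
    """Return a 0-1 score estimating how good `sentence` is as a dictionary example."""
    tokens = re.findall(r"\w+", sentence.lower())
    if keyword.lower() not in tokens:
        return 0.0
    score = 1.0
    # Penalise sentences that are too short or too long.
    if not (min_len <= len(tokens) <= max_len):
        score -= 0.3
    # Penalise rare vocabulary (words outside a frequency list).
    rare = sum(1 for t in tokens if t not in common_words and t != keyword.lower())
    score -= 0.05 * rare
    # Penalise anaphoric openings, which make examples context-dependent.
    if tokens[0] in {"ele", "ela", "isso", "este", "esta"}:
        score -= 0.2
    # Penalise numbers, formulae and other typographic noise.
    if re.search(r"[\d%=§]", sentence):
        score -= 0.1
    return max(score, 0.0)

candidates = [
    "Ela mostra isso em 3 casos (p. 45).",
    "Os resultados indicam uma forte correlação entre as duas variáveis analisadas.",
]
common = {"os", "resultados", "indicam", "uma", "forte", "entre", "as", "duas"}
best = max(candidates, key=lambda s: gdex_score(s, "resultados", common))
print(best)
```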
Analysis of the process of creating the necessary resources indicated that improvements can be made, thereby increasing the accuracy of the extraction. It is hoped that the design of the online corpus-driven dictionary of Portuguese for university students proposed by this doctoral research will serve as a basis for further related work supporting the development of DOPU.
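As a small illustration of the longest-commonest match mentioned above, the sketch below extracts, from a set of concordance lines, the most frequent contiguous word sequence spanning a keyword and one of its collocates; this is a simplified stand-in, not Sketch Engine's actual LCM computation.

```python
# Illustrative computation of a longest-commonest match (LCM): the most frequent
# contiguous word sequence spanning a keyword and one of its collocates across
# concordance lines. A simplified sketch, not Sketch Engine's implementation.
from collections import Counter

def lcm(concordance_lines, keyword, collocate):
    spans = Counter()
    for line in concordance_lines:
        tokens = line.lower().split()
        if keyword in tokens and collocate in tokens:
            i, j = tokens.index(keyword), tokens.index(collocate)
            lo, hi = min(i, j), max(i, j)
            spans[" ".join(tokens[lo:hi + 1])] += 1
    if not spans:
        return None
    # Among the most frequent spans, prefer the longest one.
    best_freq = max(spans.values())
    return max((s for s, f in spans.items() if f == best_freq), key=len)

lines = [
    "os resultados obtidos neste estudo confirmam a hipótese",
    "os resultados obtidos na análise são consistentes",
    "resultados parciais foram obtidos em 2019",
]
print(lcm(lines, "resultados", "obtidos"))  # -> "resultados obtidos"
```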

    Challenges to knowledge representation in multilingual contexts

    To meet the increasing demands of complex inter-organizational processes and the pressure for continuous innovation and internationalization, new forms of organisation are being adopted that foster more intensive collaboration and resource sharing, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimise. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched over the last decade in computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has paid little attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will increasingly take place in multilingual settings, which requires overcoming the difficulties inherent in the presence of multiple languages, for example through processes such as ontology localization. Although localization, like other processes involving multilingualism, is a rather well-developed practice whose methodologies and tools are fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as a means of supporting the development of knowledge representations, in particular ontologies, expressed in more than one language. Multilingual knowledge representation is therefore an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences. This workshop brought together researchers interested in multilingual knowledge representation in a multidisciplinary environment to debate the possibilities of cross-fertilization between knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences applied to contexts where multilingualism continuously creates new and demanding challenges to current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results from ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to optimally traverse Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarise the research setting in which the tool is currently being tested. In the second paper, Fumiko Kano presents work comparing four feature-based similarity measures derived from the cognitive sciences.
The purpose of the comparative analysis is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. To this end, datasets based on standardized pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), were used for the comparative analysis of the similarity measures, so that the measures could be verified against objectively constructed datasets. According to the author, the results demonstrate that the Bayesian Model of Generalization is the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community. In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, providing candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach that complements the direct localization/translation of ontology labels: terminologies are acquired by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues the authors present a collaborative platform, conceptME, where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data to promote knowledge sharing, enhance conceptualization and support multilingual ontology specification. In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the Ministry of Justice. The project aims to develop an advanced tool that embeds expert knowledge in the algorithms extracting specialized language from textual data (legal documents), and its outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at UCSC Central Library, where they propose to adapt, for subject librarians employed in large multilingual academic institutions, the model used by translators working within European Union institutions.
The authors use User Experience (UX) analysis to provide subject librarians with visual support, by means of “ontology tables” depicting conceptual links and the connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
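As a pointer to the kind of feature-based similarity measures compared in Kano's paper, the following sketch computes Tversky's ratio model over two hypothetical feature sets; the concepts, features and weights are invented for illustration and are neither the UIS data nor the Bayesian Model of Generalization used in the study.

```python
# Illustrative feature-based similarity between two concepts described by
# feature sets, in the spirit of measures derived from the cognitive sciences
# (here Tversky's ratio model). Feature values are invented for illustration.
def tversky(a, b, alpha=0.5, beta=0.5):
    """Similarity of concept a to concept b from shared and distinctive features."""
    common = len(a & b)
    return common / (common + alpha * len(a - b) + beta * len(b - a))

# Hypothetical feature sets for an educational concept in two classification systems.
isced_bachelor = {"tertiary", "first_degree", "3_4_years", "academic"}
local_licenciatura = {"tertiary", "first_degree", "3_years", "academic", "professional"}
print(round(tversky(isced_bachelor, local_licenciatura), 3))  # -> 0.667
```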

    Proceedings of the First Workshop on Computing News Storylines (CNewsStory 2015)

    This volume contains the proceedings of the 1st Workshop on Computing News Storylines (CNewsStory 2015), held in conjunction with the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2015) at the China National Convention Center in Beijing, on July 31st 2015. Narratives are at the heart of information sharing. Ever since people began to share their experiences, they have connected them to form narratives. The study of storytelling and the field of literary theory called narratology have developed complex frameworks and models related to various aspects of narrative such as plot structures, narrative embeddings, characters' perspectives, reader response, point of view, narrative voice, narrative goals, and many others. These notions from narratology have been applied mainly in Artificial Intelligence and in formal semantic approaches to modelling narratives (e.g. the Plot Units developed by Lehnert (1981)). In recent years, computational narratology has established itself as an autonomous field of study and research. Narrative has been the focus of a number of workshops and conferences (AAAI Symposia, the Interactive Storytelling Conference (ICIDS), Computational Models of Narrative). Furthermore, reference annotation schemes for narratives have been proposed (NarrativeML by Mani (2013)). The workshop aimed to bring together researchers from different communities working on representing and extracting narrative structures in news, a text genre which is widely used in NLP but which has received little attention with respect to narrative structure, representation and analysis. Advances in NLP technology have made it feasible to look beyond scenario-driven, atomic extraction of events from single documents and to work towards extracting story structures from multiple documents published over time as news streams. Policy makers, NGOs, information specialists (such as journalists and librarians) and others increasingly need tools that help them find salient stories in large amounts of information so that they can implement policies more effectively, monitor the actions of "big players" in society and check facts. Their tasks often revolve around reconstructing cases with respect to specific entities (e.g. persons or organizations) or events (e.g. Hurricane Katrina). Storylines represent explanatory schemas that enable better selection of relevant information and also projections into the future. They offer valuable potential for exploiting news data in innovative ways.
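As a minimal illustration of the entity-centred storyline reconstruction discussed above, the sketch below orders the documents mentioning a target entity into a timeline; the document structure and entity annotations are invented simplifications, not a system presented in the proceedings.

```python
# Minimal sketch of entity-anchored storyline construction: documents that
# mention a target entity are ordered by publication time to form a timeline.
# Documents and entities are toy data, not the workshop's reference systems.
from datetime import date

docs = [
    {"date": date(2005, 8, 29), "entities": {"Hurricane Katrina", "New Orleans"},
     "headline": "Katrina makes landfall"},
    {"date": date(2005, 9, 2), "entities": {"FEMA", "New Orleans"},
     "headline": "Relief effort criticised"},
    {"date": date(2005, 8, 31), "entities": {"Hurricane Katrina", "Superdome"},
     "headline": "Evacuees shelter in Superdome"},
]

def storyline(documents, anchor_entity):
    """Return the documents mentioning `anchor_entity`, sorted into a timeline."""
    hits = [d for d in documents if anchor_entity in d["entities"]]
    return sorted(hits, key=lambda d: d["date"])

for d in storyline(docs, "Hurricane Katrina"):
    print(d["date"], d["headline"])
```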

    Mining app reviews to support software engineering

    The thesis studies how mining app reviews can support software engineering. App reviews (short user reviews of an app in app stores) provide a potentially rich source of information to help software development teams maintain and evolve their products. Exploiting this information is however difficult, due to the large number of reviews and the difficulty of extracting useful, actionable information from short informal texts. A variety of app review mining techniques have been proposed to classify reviews and to extract information such as feature requests, bug descriptions, and user sentiments, but the usefulness of these techniques in practice is still unknown. Research in this area has grown rapidly, resulting in a large number of scientific publications (at least 182 between 2010 and 2020), but almost no independent evaluations have been performed, and how diverse techniques fit together to support specific software engineering tasks has not been described so far. The thesis presents a series of contributions to address these limitations. We first report the findings of a systematic literature review of app review mining, exposing the breadth and limitations of research in this area. Using findings from the literature review, we then present a reference model that relates features of app review mining tools to specific software engineering tasks supporting requirements engineering, software maintenance and evolution. We then present two additional contributions extending previous evaluations of app review mining techniques. We present a novel independent evaluation of opinion mining techniques using an annotated dataset created for our experiment. Our evaluation finds lower effectiveness than initially reported by the techniques' authors. A final part of the thesis evaluates approaches for searching app reviews pertinent to a particular feature. The findings show that a general-purpose search technique is more effective than the state-of-the-art purpose-built app review mining techniques, and suggest its usefulness for requirements elicitation. Overall, the thesis contributes to improving the empirical evaluation of app review mining techniques and their application in software engineering practice. Researchers and developers of future app mining tools will benefit from the novel reference model, detailed experiment designs, and publicly available datasets presented in the thesis.
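To illustrate the contrast drawn above between purpose-built review mining and general-purpose search, the sketch below ranks app reviews against a feature query using plain TF-IDF and cosine similarity; the reviews and query are invented, and this is not the retrieval pipeline evaluated in the thesis.

```python
# Sketch of a general-purpose retrieval baseline for finding app reviews that
# mention a given feature: TF-IDF vectors with cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

reviews = [
    "Love the app but dark mode is missing, please add it",
    "Crashes every time I open a PDF attachment",
    "The new dark theme hurts my eyes less at night, thanks!",
    "Sync with Google Drive stopped working after the update",
]
query = "dark mode"

vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
matrix = vectorizer.fit_transform(reviews + [query])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Rank reviews by similarity to the feature query.
for score, review in sorted(zip(scores, reviews), reverse=True):
    print(f"{score:.2f}  {review}")
```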

    Software Engineering in the Age of App Stores: Feature-Based Analyses to Guide Mobile Software Engineers

    Mobile app stores are becoming the dominant distribution platform for mobile applications. Due to their rapid growth, their impact on software engineering practices is not yet well understood, and there has been no comprehensive study exploring the effect of the mobile app store ecosystem on software engineering practices. Therefore, this thesis, as its first contribution, empirically studies the app store as a phenomenon from the developers' perspective to investigate the extent to which app stores affect software engineering tasks. The study highlights the importance of a mobile application's features as a deliverable unit from developers to users. The study uncovers the involvement of app stores in eliciting requirements, perfective maintenance and domain analysis, in the form of discoverable features written as text in descriptions and user reviews. Developers discover possible features to include by searching the app store. Through interviews, developers revealed the cost of such tasks given the highly prolific user base that major app stores exhibit. Therefore, the thesis, in its second contribution, uses techniques to extract features from unstructured natural language artefacts. This is motivated by the indication that developers monitor similar applications, in terms of provided features, to understand user expectations in a certain application domain. The thesis then devises a semantic-aware technique for representing mobile applications using textual functionality descriptions. This representation is shown to successfully cluster mobile applications, uncovering a finer-grained, functionality-based grouping of mobile apps. The thesis furthermore provides a comparison of baseline techniques for feature extraction from textual artefacts based on three main criteria: silhouette width measure, human judgement and execution time. Finally, the thesis, in its final contribution, shows that features do indeed migrate in the app store beyond category boundaries, and discovers a set of migratory characteristics and their relationship to price, rating and popularity in the app stores studied.
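As an illustration of functionality-based clustering of apps from their textual descriptions, and of the silhouette width criterion mentioned above, the following sketch vectorises toy descriptions, clusters them with k-means and reports the silhouette score; it is not the semantic-aware representation devised in the thesis.

```python
# Minimal sketch of functionality-based app clustering: vectorise textual app
# descriptions, cluster them, and check cluster quality with silhouette width.
# Descriptions are invented toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

descriptions = [
    "Track your runs, calories burned and heart rate",
    "Log workouts and monitor daily step counts",
    "Scan receipts and manage monthly budgets",
    "Track expenses and split bills with friends",
]

X = TfidfVectorizer(stop_words="english").fit_transform(descriptions)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

print(labels)                       # e.g. [0 0 1 1]
print(silhouette_score(X, labels))  # higher is better (max 1.0)
```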

    Concordancing Software in Practice: An investigation of searches and translation problems across EU official languages

    The present work reports on an empirical study aimed at investigating translation problems across multiple language pairs. In particular, the analysis aims to develop a methodological approach to studying concordance search logs, taken as manifestations of translation problems and, in a wider perspective, of information needs. As search logs are a relatively unexplored data type within translation process research, a controlled environment was needed in order to carry out this exploratory analysis without incurring additional problems caused by an excessive number of variables. The logs were collected at the European Commission and contain a large volume of searches from English into 20 EU languages that staff translators working for the EU translation services submitted to an internally available multilingual concordancer. The study attempts to (i) identify differences in the searches (i.e. problems) based on the language pairs; and (ii) group problems into types. Furthermore, the interactions between concordance users and the tool itself have been examined to provide a translation-oriented perspective on the domain of Human-Computer Interaction. The study draws on the literature on translation problems, Information Retrieval and Web search log analysis, starting from the assumption that, from the perspective of concordance searching, translation problems are best interpreted as information needs for which the concordancer is chosen as a form of external support. The structure of a concordance search is examined in all its parts and is eventually broken down into two main components: the 'Search Strategy' component and the 'Problem Unit' component. The former was analysed using a mainly quantitative approach, whereas the latter was addressed from a more qualitative perspective. The analysis of the Problem Unit takes into account the length of the search strings as well as their content and linguistic form, each addressed with a different methodological approach. Based on the understanding of concordance searches as manifestations of translation problems, a user-centered classification of translation-oriented information needs is developed to account for as many "problem" scenarios as possible. The initial expectation that different languages would experience different problems could not be verified: the 20 language pairs considered in this study behaved consistently on many levels and, due to the specific research environment, no definite conclusions could be reached as regards the role of the language family criterion for problem identification. The analysis of the 'Problem Unit' component highlighted automated support for translating Named Entities as a possible area for further research in translation technology and the development of computer-based translation support tools. Finally, the study points to (concordance) search logs as an additional data type to be used in experiments on the translation process and for triangulation purposes, while drawing attention to the concordancer as a type of translation aid to be further fine-tuned for the needs of professional translators.
*** The present work is an empirical study of the translation problems that emerge when several language pairs are considered; in particular, it develops a methodology for analysing the logs of searches that translators submit to concordancing software (a concordancer), viewed as manifestations of translation problems which, in a broader perspective, can also be regarded as "information needs". Search logs are still a relatively new and unexplored data type in translation process research, so an exploratory analysis had to be carried out in a controlled setting to avoid the additional problems caused by an excessive number of variables. The logs were collected at the European Commission and contain a very large number of searches submitted by translators employed by the EU translation services to a multilingual concordancer available as an internal resource. The analysis seeks to identify differences in the searches (and hence in the problems) depending on the selected language pair and to group those problems into types. The study also provides information on how users interact with the software in a translation context, contributing to research in Human-Computer Interaction. The study draws on the literature on translation problems, Information Retrieval and Web searching, and treats the translation problems associated with the use of a concordancing tool as information needs for which the concordancer is chosen as a form of external support. Each search was examined and broken down into two main components, the "Search Strategy" and the "Problem Unit", studied with mainly quantitative and mainly qualitative approaches respectively. The analysis of the Problem Unit considers the length, content and linguistic form of the search strings, each examined with a purpose-designed methodology. Having interpreted concordance searches as manifestations of information needs, the analysis then defines a user-centred set of categories of translation-related information needs (or problems) so as to cover as many search scenarios as possible. The initial assumption that different languages would exhibit different problems could not be verified empirically, since the 20 language pairs examined behaved rather similarly at the various levels of analysis. Given the peculiarity of the data used and the specificity of the European Union as a research setting, no definite conclusions could be reached on the role of language families as indicators of problems compared with other classification criteria. The analysis of the Problem Unit highlighted Named Entities as a possible object of future research projects in translation technology.
Besides contributing to future developments in computer-based translation support, the study also puts forward (concordance) search logs as an additional data type for studying the translation process and for triangulating empirical and experimental results, and suggests possible improvements to concordancing software based on the information needs observed in translators.
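As a small illustration of the quantitative treatment of the 'Search Strategy' component, the sketch below groups hypothetical concordancer log entries by target language and summarises query length in tokens; the log format is an assumed simplification, not the actual data collected at the European Commission.

```python
# Sketch of the kind of quantitative analysis applied to concordancer search
# logs: grouping searches by target language and summarising query length.
# The (target_lang, query) log format is a hypothetical simplification.
from collections import defaultdict
from statistics import mean

log = [
    ("DE", "legal certainty"),
    ("DE", "in accordance with Article 5"),
    ("FR", "burden of proof"),
    ("FR", "the Commission shall adopt implementing acts"),
]

lengths = defaultdict(list)
for target_lang, query in log:
    lengths[target_lang].append(len(query.split()))   # query length in tokens

for lang, values in sorted(lengths.items()):
    print(f"EN->{lang}: {len(values)} searches, mean length {mean(values):.1f} tokens")
```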

    Analysing film content : a text-based approach


    Neural models of language use: Studies of language comprehension and production in context

    Artificial neural network models of language are mostly known and appreciated today for providing a backbone for formidable AI technologies. This thesis takes a different perspective. Through a series of studies on language comprehension and production, it investigates whether artificial neural networks—beyond being useful in countless AI applications—can serve as accurate computational simulations of human language use, and thus as a new core methodology for the language sciences
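One common way such simulations are built, sketched below under generic assumptions, is to take per-word surprisal from a pretrained language model (here GPT-2 via the Hugging Face transformers library) as a predictor of human comprehension difficulty; the model, sentence and setup are illustrative and are not the specific studies reported in the thesis.

```python
# Per-token surprisal from a pretrained language model, often compared against
# human reading times as a simulation of comprehension. A generic sketch, not
# the models or datasets used in the thesis.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

sentence = "The horse raced past the barn fell."
ids = tokenizer(sentence, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits

# Surprisal of each token given its left context, converted from nats to bits.
log_probs = torch.log_softmax(logits, dim=-1)
surprisal = -log_probs[0, :-1].gather(1, ids[0, 1:].unsqueeze(1)).squeeze(1)
surprisal = surprisal / torch.log(torch.tensor(2.0))

for tok, s in zip(tokenizer.convert_ids_to_tokens(ids[0, 1:]), surprisal):
    print(f"{tok:>10}  {s.item():5.2f} bits")
```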