8,959 research outputs found

    Dialogue as Data in Learning Analytics for Productive Educational Dialogue

    This paper provides a novel, conceptually driven stance on the contemporary analytic challenges faced in treating dialogue as a form of data across online and offline sites of learning. Prior research has taken preliminary steps to detect occurrences of productive dialogue using automated analysis techniques. Such advances have the potential to foster effective dialogue using learning analytic techniques that scaffold, give feedback on, and provide pedagogic contexts promoting such dialogue. However, translating much prior learning science research to online contexts is complex: it requires operationalizing constructs theorized in different contexts (often face-to-face) and based on different datasets and structures (often spoken dialogue). In this paper, we explore what could constitute effective analysis of productive online dialogues, arguing that it requires consideration of three key facets of the dialogue: features indicative of productive dialogue; the unit of segmentation; and the interplay of features and segmentation with the temporal underpinning of learning contexts. The paper thus foregrounds key considerations regarding the analysis of dialogue data in emerging learning analytics environments, both for learning-science and computationally oriented researchers.
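
    As a rough illustration of the three facets the paper highlights (features, segmentation, and temporality), the following sketch computes toy indicators of productive dialogue over fixed time windows of a chat log; the window size, marker lexicon, and feature names are assumptions for demonstration, not the authors' constructs.

        # Minimal sketch, not from the paper: toy "productive dialogue" indicators
        # computed per time-windowed segment of a chat transcript. The 5-minute
        # window, the reasoning-marker lexicon, and the feature names are assumed.
        from dataclasses import dataclass

        @dataclass
        class Turn:
            speaker: str
            time: float   # seconds from session start
            text: str

        REASONING_MARKERS = {"because", "so", "therefore", "if", "why"}  # assumed lexicon

        def segment_by_window(turns, window=300.0):
            """Group turns into fixed 5-minute segments (one possible unit of segmentation)."""
            segments = {}
            for t in turns:
                segments.setdefault(int(t.time // window), []).append(t)
            return [segments[k] for k in sorted(segments)]

        def dialogue_features(segment):
            """Toy per-segment features: questioning, explicit reasoning, breadth of participation."""
            words = [w.lower().strip("?,.") for t in segment for w in t.text.split()]
            return {
                "questions": sum(t.text.count("?") for t in segment),
                "reasoning_markers": sum(w in REASONING_MARKERS for w in words),
                "distinct_speakers": len({t.speaker for t in segment}),
            }

        turns = [
            Turn("A", 12.0, "Why does the model overfit here?"),
            Turn("B", 40.0, "Because the training set is tiny, so it memorises."),
            Turn("A", 310.0, "If we add data, would that change?"),
        ]
        for i, seg in enumerate(segment_by_window(turns)):
            print(i, dialogue_features(seg))

    Changing the segmentation (for example, by topic episode rather than by clock time) changes which features fire in which segment, which is precisely the interplay between features, segmentation, and temporality that the paper foregrounds.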

    Many uses, many annotations for large speech corpora: Switchboard and TDT as case studies

    This paper discusses the challenges that arise when large speech corpora receive an ever-broadening range of diverse and distinct annotations. Two case studies of this process are presented: the Switchboard Corpus of telephone conversations and the TDT2 corpus of broadcast news. Switchboard has undergone two independent transcriptions and various types of additional annotation, all carried out as separate projects that were dispersed both geographically and chronologically. The TDT2 corpus has also received a variety of annotations, but all directly created or managed by a core group. In both cases, issues arise involving the propagation of repairs, consistency of references, and the ability to integrate annotations having different formats and levels of detail. We describe a general framework whereby these issues can be addressed successfully. (Comment: 7 pages, 2 figures.)
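
    The integration problem the paper describes can be made concrete with a stand-off annotation structure, in which every layer points at time offsets in the recording rather than at one transcription, so a repaired transcript does not invalidate other layers. The sketch below illustrates that idea under stated assumptions; it is not the paper's framework, and the class names, fields, and identifiers are hypothetical.

        # Minimal stand-off annotation sketch (hypothetical, not the paper's framework):
        # annotation layers reference the audio timeline, so independent projects can
        # add layers and repairs without rewriting each other's data.
        from dataclasses import dataclass, field

        @dataclass
        class Annotation:
            layer: str            # e.g. "transcript-v2", "disfluency", "topic"
            start: float          # seconds into the recording
            end: float
            label: str

        @dataclass
        class Recording:
            audio_id: str
            annotations: list = field(default_factory=list)

            def add(self, layer, start, end, label):
                self.annotations.append(Annotation(layer, start, end, label))

            def layers_overlapping(self, start, end):
                """Integrate layers of differing formats by querying a common timeline."""
                return [a for a in self.annotations if a.start < end and a.end > start]

        rec = Recording("sw02001")                            # invented conversation id
        rec.add("transcript-v1", 1.20, 1.95, "uh I mean")
        rec.add("transcript-v2", 1.20, 1.95, "I mean")        # independent re-transcription
        rec.add("disfluency", 1.20, 1.45, "filled-pause")     # separate annotation project
        print([a.layer for a in rec.layers_overlapping(1.0, 2.0)])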

    Threats to Democratic Stability: Comparing the Elections of 2016 and 1860

    A Memetic Analysis of a Phrase by Beethoven: Calvinian Perspectives on Similarity and Lexicon-Abstraction

    This article discusses some general issues arising from the study of similarity in music, both human-conducted and computer-aided, and then progresses to a consideration of similarity relationships between patterns in a phrase by Beethoven, from the first movement of the Piano Sonata in A flat major op. 110 (1821), and various potential memetic precursors. This analysis is followed by a consideration of how the kinds of similarity identified in the Beethoven phrase might be understood in psychological/conceptual and then neurobiological terms, the latter by means of William Calvin’s Hexagonal Cloning Theory. This theory offers a mechanism for the operation of David Cope’s concept of the lexicon, conceived here as a museme allele-class. I conclude by attempting to correlate and map the various spaces within which memetic replication occurs.
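
    As a generic illustration of what a computer-aided similarity measure can look like (this is not the article's memetic analysis, and the melodic fragments are invented), the sketch below compares two patterns by taking the edit distance between their pitch-interval sequences, which makes the comparison transposition-invariant.

        # Illustrative sketch only: edit distance over pitch-interval sequences as
        # one crude, generic measure of melodic similarity. Example pitches invented.
        def intervals(pitches):
            """Represent a melody by its successive semitone intervals (transposition-invariant)."""
            return [b - a for a, b in zip(pitches, pitches[1:])]

        def edit_distance(a, b):
            """Levenshtein distance between two interval sequences."""
            dp = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
                  for i in range(len(a) + 1)]
            for i in range(1, len(a) + 1):
                for j in range(1, len(b) + 1):
                    dp[i][j] = min(dp[i - 1][j] + 1, dp[i][j - 1] + 1,
                                   dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))
            return dp[len(a)][len(b)]

        phrase = [68, 70, 72, 73]      # hypothetical four-note pattern (MIDI numbers)
        precursor = [66, 68, 70, 71]   # hypothetical precursor, a tone lower
        print(edit_distance(intervals(phrase), intervals(precursor)))  # 0: same contour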

    Using Computational Text Classification for Qualitative Research and Evaluation in Extension

    This article introduces a process for computational text classification that can be used in a variety of qualitative research and evaluation settings. The process leverages supervised machine learning based on an implementation of a multinomial Bayesian classifier. Applied within a community of inquiry framework, the algorithm was used to identify evidence of cognitive presence, social presence, and teaching presence in the text contributions (44,000 unique posts) of more than 4,000 participants in an online environmental education course. Results indicate that computational text classification can significantly reduce labor costs and can help Extension research faculty scale, accelerate, and ensure the reproducibility of their research.
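
    The core technique named in the abstract, a supervised multinomial Bayesian classifier over word counts, can be sketched in a few lines of scikit-learn; this is not the authors' implementation, and the example posts and labels below are invented stand-ins for hand-coded training data.

        # Minimal multinomial naive Bayes text-classification sketch (scikit-learn),
        # with invented posts labelled by an assumed coding scheme.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        train_posts = [
            "I compared both readings and here is my reasoning about the evidence.",  # cognitive presence
            "Great point, Sam! I really enjoyed your story.",                          # social presence
            "Please post your reflections by Friday and cite your sources.",           # teaching presence
        ]
        train_labels = ["cognitive", "social", "teaching"]

        model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
        model.fit(train_posts, train_labels)
        print(model.predict(["Thanks everyone, I loved reading your introductions!"]))

    In practice such a classifier would be trained on thousands of hand-coded posts and validated against human coders before being used to label the remaining contributions.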

    Deconstructing U.S. Army Maps of Korea: A Case Study for Rethinking Historical Environmental Data

    At a time when the natural world and global climate are experiencing extreme changes at unprecedented speeds, understanding these environmental changes over time is more important than ever. With advances in remote sensing technology, large amounts of information about the natural world are becoming more accessible than ever before; however, satellite-collected data are only available from 1984 onwards. To understand how land use has changed on longer timescales, researchers have turned to archival maps as a data source. Archival maps are a rich source of environmental information; however, they are often saturated with complicated colonial histories. Maps, more so than other historical materials, can hide behind the veneer of objectivity and thus escape important interrogation. As methods that utilize archival maps become more popular, the need to critically analyze the historical and social contexts of the maps becomes even stronger. This thesis argues for a rethinking of historical environmental data through a case study of U.S. Military Maps of Korea from 1945 to 1954. By providing appropriate historical and social context, three maps of Seoul are deconstructed, thereby illuminating their fallibility as objective environmental sources. This case study ultimately encourages scholars to engage with environmental history more critically and to think beyond the analogues dictated by current technology.

    Deep learning in remote sensing: a review

    Standing at the paradigm shift towards data-intensive science, machine learning techniques are becoming increasingly important. In particular, deep learning, as a major breakthrough in the field, has proven to be an extremely powerful tool in many areas. Shall we embrace deep learning as the key to everything? Or should we resist a 'black-box' solution? There are controversial opinions in the remote sensing community. In this article, we analyze the challenges of using deep learning for remote sensing data analysis, review the recent advances, and provide resources to make deep learning in remote sensing ridiculously simple to start with. More importantly, we encourage remote sensing scientists to bring their expertise into deep learning and to use it as an implicit general model to tackle unprecedented, large-scale, influential challenges such as climate change and urbanization. (Comment: accepted for publication in the IEEE Geoscience and Remote Sensing Magazine.)
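
    For readers who want a concrete starting point, the sketch below shows a tiny convolutional classifier for multispectral image patches in PyTorch; the 13 input bands, the 10 land-cover classes, and the architecture are assumptions for demonstration, not a model from the review.

        # Illustrative sketch only: a small CNN for multispectral patch classification.
        import torch
        import torch.nn as nn

        class PatchClassifier(nn.Module):
            def __init__(self, bands=13, classes=10):    # Sentinel-2-like bands, assumed classes
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(bands, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),              # global average pooling
                )
                self.head = nn.Linear(64, classes)

            def forward(self, x):                         # x: (batch, bands, H, W)
                return self.head(self.features(x).flatten(1))

        model = PatchClassifier()
        dummy = torch.randn(4, 13, 64, 64)                # four fake 64x64 multispectral patches
        print(model(dummy).shape)                         # torch.Size([4, 10])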

    Image analysis techniques for diabetic retinopathy detection (Técnicas de análise de imagens para detecção de retinopatia diabética)

    Advisors: Anderson de Rezende Rocha and Jacques Wainer. Doctoral thesis, Universidade Estadual de Campinas, Instituto de Computação. Abstract: Diabetic Retinopathy (DR) is a long-term complication of diabetes and the leading cause of blindness among working-age adults. Regular eye examinations are necessary to diagnose DR at an early stage, when treatment has the best prognosis and visual loss can be delayed or even prevented. Driven by the growing prevalence of diabetes and by the increased risk that diabetic patients have of developing eye diseases, several works with well-established and promising approaches have been proposed for automatic screening. However, most existing work focuses on lesion detection using visual characteristics specific to each type of lesion. Additionally, handcrafted solutions for detecting referable diabetic retinopathy and for identifying DR stages still depend heavily on lesions, whose repeated detection is complex and cumbersome to implement, even when a unified detection scheme is adopted. The current state of the art for automated referral assessment relies on highly abstract, data-driven approaches: they receive an image and produce a response, which may come from a single model or an ensemble, and they are not easily explainable. This work therefore aims at enhancing lesion detection and reinforcing referral decisions with advanced handcrafted, two-tiered image representations. We also set out to compose sophisticated data-driven models for referable DR detection and to incorporate supervised feature learning with saliency-oriented mid-level image representations, arriving at a robust yet accountable automated screening approach. Ultimately, we aimed to integrate our software solutions with simple, portable retinal imaging devices. For the lesion detection task, we proposed advanced handcrafted image characterization approaches that effectively detect different types of lesions. Our leading advances center on a novel coding technique for retinal images and on preserving information during the pooling of the extracted features. Automatically deciding whether a patient should be referred to an ophthalmic specialist is a more difficult, and still hotly debated, research aim. We designed a simple and robust method for referral decisions that does not rely on lesion detection stages. We also proposed a novel and effective data-driven model that significantly improves performance in DR screening. Our accountable data-driven model produces a reliable response, based on local and global evidence, along with a heatmap/saliency map that shows how much each pixel contributed to the decision. We exploited this explainability methodology to create a local descriptor that is encoded into a rich mid-level representation. Data-driven methods are the state of the art for diabetic retinopathy screening; however, saliency maps are essential not only to interpret what was learned in terms of pixel importance but also to reinforce small discriminative characteristics that have the potential to improve the diagnosis.
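
    The kind of accountable, data-driven screening the thesis argues for can be illustrated with a global-average-pooling CNN whose final linear weights yield a class activation map, i.e. a per-pixel importance heatmap produced alongside the screening decision. The sketch below is a hypothetical, untrained stand-in, not the thesis' model; the two-class setup and layer sizes are assumptions.

        # Minimal sketch (assumed architecture, untrained weights): a CNN that returns
        # both a referral decision and a class activation map over the fundus image.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ScreeningNet(nn.Module):
            def __init__(self, classes=2):                   # referable vs. non-referable (assumed)
                super().__init__()
                self.backbone = nn.Sequential(
                    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                )
                self.head = nn.Linear(32, classes)

            def forward(self, x):
                fmaps = self.backbone(x)                     # (B, 32, H/2, W/2)
                logits = self.head(fmaps.mean(dim=(2, 3)))   # global average pooling
                return logits, fmaps

            def activation_map(self, fmaps, class_idx):
                """Sum feature maps weighted by the chosen class's head weights (CAM)."""
                w = self.head.weight[class_idx]              # (32,)
                return F.relu(torch.einsum("c,bchw->bhw", w, fmaps))

        net = ScreeningNet()
        fundus = torch.randn(1, 3, 128, 128)                 # fake RGB fundus image
        logits, fmaps = net(fundus)
        cam = net.activation_map(fmaps, class_idx=int(logits.argmax(1)))
        print(logits.shape, cam.shape)                       # (1, 2) and (1, 64, 64)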