
    Gene expression analysis in microdissected renal tissue - Current challenges and strategies

    The architecture and compartmentalization of the kidney have stimulated the development of an array of microtechniques to study the functional differences between the distinct nephron segments. With the vast amounts of genomic sequence data now available, the groundwork has been laid for a comprehensive characterization of the molecular pathways defining the differences in nephron function. The development of sensitive gene expression techniques has provided the tools for a comprehensive molecular analysis of specific renal microenvironments: quantitative RT-PCR technologies now allow the analysis of specific mRNAs from as little as a single microdissected renal cell. A more global view of gene expression regulation is a logical development from the application of large-scale profiling techniques. In this review, we discuss the power and pitfalls of these approaches, including their potential for the functional characterization of nephron heterogeneity and diagnostic application in renal disease. Copyright (C) 2002 S. Karger AG, Basel

    Content analysis: What are they talking about?

    Quantitative content analysis is increasingly used to go beyond surface-level analyses in Computer-Supported Collaborative Learning (CSCL), such as counting messages, but critical reflection on accepted practice has generally not been reported. A review of CSCL conference proceedings revealed a general vagueness in definitions of units of analysis. In general, arguments for choosing a unit were lacking, and decisions made while developing the content analysis procedures were not made explicit. This article illustrates that the currently accepted practices concerning the ‘unit of meaning’ are not generally applicable to quantitative content analysis of electronic communication. Such analysis is affected by ‘unit boundary overlap’ and by contextual constraints arising from the technology used. The analysis of e-mail communication required a different unit of analysis and segmentation procedure. This procedure proved to be reliable, and the subsequent coding of these units for quantitative analysis yielded satisfactory reliabilities. These findings carry implications and recommendations for current content analysis practice in CSCL research.

    Dialogue as Data in Learning Analytics for Productive Educational Dialogue

    This paper provides a novel, conceptually driven stance on the contemporary analytic challenges faced in the treatment of dialogue as a form of data across on- and offline sites of learning. In prior research, preliminary steps have been taken to detect occurrences of such dialogue using automated analysis techniques. Such advances have the potential to foster effective dialogue using learning analytic techniques that scaffold, give feedback on, and provide pedagogic contexts promoting such dialogue. However, the translation of much prior learning science research to online contexts is complex, requiring the operationalization of constructs theorized in different contexts (often face-to-face) and based on different datasets and structures (often spoken dialogue). In this paper, we explore what could constitute the effective analysis of productive online dialogues, arguing that it requires consideration of three key facets of the dialogue: features indicative of productive dialogue; the unit of segmentation; and the interplay of features and segmentation with the temporal underpinning of learning contexts. The paper thus foregrounds key considerations regarding the analysis of dialogue data in emerging learning analytics environments, both for learning-science and for computationally oriented researchers.

    Computing the Affective-Aesthetic Potential of Literary Texts

    In this paper, we compute the affective-aesthetic potential (AAP) of literary texts by using a simple sentiment analysis tool called SentiArt. In contrast to other established tools, SentiArt is based on publicly available vector space models (VSMs) and requires no emotional dictionary, thus making it applicable in any language for which VSMs have been made available (>150 so far) and avoiding issues of low coverage. In a first study, the AAP values of all words of a widely used lexical databank for German were computed and the VSM’s ability in representing concrete and more abstract semantic concepts was demonstrated. In a second study, SentiArt was used to predict ~2800 human word valence ratings and shown to have a high predictive accuracy (R² > 0.5, p < 0.0001). A third study tested the validity of SentiArt in predicting emotional states over (narrative) time using human liking ratings from reading a story. Again, the predictive accuracy was highly significant: R²adj = 0.46, p < 0.0001, establishing the SentiArt tool as a promising candidate for lexical sentiment analyses at both the micro- and macrolevels, i.e., short and long literary materials. Possibilities and limitations of lexical VSM-based sentiment analyses of diverse complex literary texts are discussed in the light of these results.
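    The dictionary-free idea behind the abstract above can be sketched in a few lines: score a word by comparing its embedding to the embeddings of emotion label words within the same vector space. This is a minimal illustration only, not the SentiArt implementation; the two-dimensional vectors and the label words "joy" and "sadness" are invented stand-ins for a real pretrained VSM and its label set.

    ```python
    import math

    # Toy stand-in for a pretrained vector space model (VSM).
    # Real applications would load large publicly available embeddings;
    # these 2-D vectors are invented purely for illustration.
    vsm = {
        "sunshine": [0.9, 0.1],
        "funeral":  [0.1, 0.9],
        "joy":      [1.0, 0.0],   # hypothetical positive label word
        "sadness":  [0.0, 1.0],   # hypothetical negative label word
    }

    def cosine(a, b):
        """Cosine similarity between two vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return dot / (norm_a * norm_b)

    def aap_score(word, pos_label="joy", neg_label="sadness"):
        """Dictionary-free sentiment proxy: similarity to a positive
        label word minus similarity to a negative label word."""
        v = vsm[word]
        return cosine(v, vsm[pos_label]) - cosine(v, vsm[neg_label])
    ```

    Because the score is derived entirely from the embedding space, the same procedure works in any language for which a VSM exists, which is the coverage advantage the abstract emphasizes.
    
    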