
    Analyzing collaborative learning processes automatically

    In this article we describe the emerging area of text classification research focused on the problem of collaborative learning process analysis, both from a broad perspective and more specifically in terms of a publicly available tool set called TagHelper tools. Analyzing the variety of pedagogically valuable facets of learners’ interactions is a time-consuming and effortful process. Improving automated analyses of such highly valued processes of collaborative learning by adapting and applying recent text classification technologies would make it a less arduous task to obtain insights from corpus data. This endeavor also holds the potential for enabling substantially improved on-line instruction, both by providing teachers and facilitators with reports about the groups they are moderating and by triggering context-sensitive collaborative learning support on an as-needed basis. In this article, we report on an interdisciplinary research project that has been investigating the effectiveness of applying text classification technology to a large CSCL corpus that has been analyzed by human coders using a theory-based multidimensional coding scheme. We report promising results and include an in-depth discussion of important issues, such as reliability, validity, and efficiency, that should be considered when deciding on the appropriateness of adopting a new technology such as TagHelper tools. One major technical contribution of this work is a demonstration that an important piece of the work towards making text classification technology effective for this purpose is designing and building linguistic pattern detectors, otherwise known as features, that can be extracted reliably from texts and that have high predictive power for the categories of discourse actions that the CSCL community is interested in.
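
    The feature-based classification pipeline the abstract describes can be illustrated with a minimal sketch: extract linguistic features from human-coded contributions, then train a classifier on them. The categories, example messages, and feature choices below are hypothetical placeholders, not TagHelper's actual coding scheme or feature detectors.

```python
# Minimal sketch of feature-based discourse classification, assuming
# scikit-learn. Labels and example contributions are hypothetical; TagHelper's
# real feature detectors and coding dimensions are far richer.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Human-coded training data: each contribution carries a discourse-action label.
messages = [
    "I think the answer is 42 because of the earlier step",
    "Can you explain how you got that result?",
    "Good point, I agree with your reasoning",
]
labels = ["claim", "question", "agreement"]  # hypothetical categories

# Word and bigram counts stand in for richer linguistic pattern detectors.
model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["Why do you think that is true?"]))
```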

    Loss-resilient Coding of Texture and Depth for Free-viewpoint Video Conferencing

    Free-viewpoint video conferencing allows a participant to observe the remote 3D scene from any freely chosen viewpoint. An intermediate virtual viewpoint image is commonly synthesized using two pairs of transmitted texture and depth maps from two neighboring captured viewpoints via depth-image-based rendering (DIBR). To maintain high quality of synthesized images, it is imperative to contain the adverse effects of network packet losses that may arise during texture and depth video transmission. Towards this end, we develop an integrated approach that exploits the representation redundancy inherent in the multiple streamed videos: a voxel in the 3D scene visible to two captured views is sampled and coded twice in the two views. In particular, at the receiver we first develop an error concealment strategy that adaptively blends corresponding pixels in the two captured views during DIBR, so that pixels from the more reliable transmitted view are weighted more heavily. We then couple it with a sender-side optimization of reference picture selection (RPS) during real-time video coding, so that blocks containing samples of voxels that are visible in both views are more error-resiliently coded in one view only, given that adaptive blending will erase errors in the other view. Further, synthesized view distortion sensitivities to texture versus depth errors are analyzed, so that the relative importance of texture and depth code blocks can be computed for system-wide RPS optimization. Experimental results show that the proposed scheme can outperform the use of a traditional feedback channel by up to 0.82 dB on average at an 8% packet loss rate, and by as much as 3 dB for particular frames.
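
    The receiver-side concealment step, adaptive blending of corresponding pixels from the two views, can be sketched as follows. The per-pixel reliability estimates and the DIBR warping that produces the corresponding pixels are assumed to be supplied elsewhere; the weighting rule here is an illustrative choice, not necessarily the paper's exact formula.

```python
# Minimal sketch of receiver-side adaptive blending during DIBR, assuming the
# decoder supplies per-pixel reliability estimates (e.g. lower for regions hit
# by packet loss). The weighting rule is illustrative only.
import numpy as np

def blend_views(pix_left, pix_right, rel_left, rel_right):
    """Blend corresponding warped pixels from the two captured views."""
    w_left = rel_left / (rel_left + rel_right + 1e-9)
    return w_left * pix_left + (1.0 - w_left) * pix_right

# Toy example: the right view suffered losses, so its pixels weigh less.
left = np.array([100.0, 120.0, 130.0])
right = np.array([180.0, 120.0, 130.0])   # first pixel corrupted by loss
print(blend_views(left, right, np.ones(3), np.full(3, 0.2)))
```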

    Extending AMCW lidar depth-of-field using a coded aperture

    By augmenting a high resolution full-field Amplitude Modulated Continuous Wave (AMCW) lidar system with a coded aperture, we show that depth-of-field can be extended using explicit, albeit blurred, range data to determine PSF scale. Because complex-domain range images contain explicit range information, the aperture design is unconstrained by the necessity for range determination by depth-from-defocus. The coded aperture design is shown to improve restoration quality over a circular aperture. A proof-of-concept algorithm using dynamic PSF determination and spatially variant Landweber iterations is developed and, using an empirically sampled point spread function, is shown to work in cases without serious multipath interference or high phase complexity.
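
    Landweber iteration, the restoration engine named above, is simple to state: repeatedly re-blur the current estimate, compare with the observation, and step along the adjoint of the blur. The sketch below is the spatially invariant textbook form with one fixed PSF; the paper's algorithm instead varies the PSF scale per pixel using the lidar range data, which is omitted here.

```python
# Minimal sketch of Landweber deconvolution (spatially invariant form).
# The paper's spatially variant version selects a PSF scale per pixel from
# the range data; here a single fixed PSF stands in for that machinery.
import numpy as np
from scipy.signal import fftconvolve

def landweber(blurred, psf, n_iter=100, step=0.5):
    """Iterate x <- x + step * A^T (y - A x), where A is convolution with psf."""
    psf_adj = psf[::-1, ::-1]  # adjoint of convolution is correlation
    x = blurred.copy()
    for _ in range(n_iter):
        residual = blurred - fftconvolve(x, psf, mode="same")
        x += step * fftconvolve(residual, psf_adj, mode="same")
    return x

# Toy usage: blur an impulse image with a small box PSF, then restore it.
psf = np.ones((5, 5)) / 25.0
image = np.zeros((64, 64)); image[32, 32] = 1.0
restored = landweber(fftconvolve(image, psf, mode="same"), psf)
```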

    Learning Wavefront Coding for Extended Depth of Field Imaging

    Depth of field is an important characteristic of imaging systems that strongly affects the quality of the acquired spatial information. Extended depth of field (EDoF) imaging is a challenging ill-posed problem and has been extensively addressed in the literature. We propose a computational imaging approach for EDoF in which we employ wavefront coding via a diffractive optical element (DOE) and achieve deblurring through a convolutional neural network. Thanks to the end-to-end differentiable modeling of optical image formation and computational post-processing, we jointly optimize the optical design, i.e., the DOE, and the deblurring through standard gradient descent methods. Based on the properties of the underlying refractive lens and the desired EDoF range, we provide an analytical expression for the search space of the DOE, which is instrumental in the convergence of the end-to-end network. We achieve superior EDoF imaging performance compared to the state of the art, demonstrating results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
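
    The end-to-end idea, backpropagating a reconstruction loss through both a differentiable image-formation model and a deblurring network so that the optical design and the CNN are optimized jointly, can be sketched as below (PyTorch assumed). The `ToyOptics` module, a learnable convolution kernel, merely stands in for a physically accurate DOE wave-propagation model; all names are hypothetical.

```python
# Conceptual sketch of end-to-end optimization of an optical element together
# with a deblurring CNN, assuming PyTorch. The learnable kernel below is a
# stand-in for a differentiable DOE/wave-optics model; names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyOptics(nn.Module):
    """Learnable PSF standing in for the DOE-parameterized image formation."""
    def __init__(self, kernel_size=9):
        super().__init__()
        self.doe_params = nn.Parameter(torch.randn(1, 1, kernel_size, kernel_size))

    def forward(self, scene):
        # Softmax keeps the PSF non-negative and energy-preserving.
        psf = torch.softmax(self.doe_params.flatten(), 0).view_as(self.doe_params)
        return F.conv2d(scene, psf, padding="same")

optics = ToyOptics()
deblur = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(16, 1, 3, padding=1))
opt = torch.optim.Adam(list(optics.parameters()) + list(deblur.parameters()), lr=1e-3)

for _ in range(10):                       # toy training loop
    scene = torch.rand(4, 1, 32, 32)      # stand-in for ground-truth images
    loss = F.mse_loss(deblur(optics(scene)), scene)
    opt.zero_grad()
    loss.backward()                       # gradients reach both optics and CNN
    opt.step()
```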

    Cortical Computation of Stereo Disparity

    Our ability to see the world in depth is a major accomplishment of the brain. Previous models of how positionally disparate cues to the two eyes are binocularly matched limit possible matches by invoking uniqueness and continuity constraints. These approaches cannot explain data wherein uniqueness fails and changes in contrast alter depth percepts, or where surface discontinuities cause surfaces to be seen in depth although they are registered by only one eye (da Vinci stereopsis). A new stereopsis model explains these depth percepts by proposing how cortical complex cells binocularly filter their inputs and how monocular and binocular complex cells compete to determine the winning depth signals. Defense Advanced Research Projects Agency (N00014-92-J-4015); Air Force Office of Scientific Research (90-0175); Office of Naval Research (N00014-91-J-4100); James S. McDonnell Foundation (94-40); Defense Advanced Research Projects Agency and the Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657)
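
    How a cortical complex cell can binocularly filter its inputs is often illustrated with the standard binocular-energy mechanism, sketched below. This is a generic textbook construction, not the article's specific model: the cell sums the squared responses of a quadrature pair of Gabor filters applied to the left- and right-eye signals, responding most when the stimulus disparity matches its preferred disparity.

```python
# Generic binocular-energy sketch (a standard textbook mechanism), not the
# article's model: a "complex cell" sums squared responses of a quadrature
# pair of Gabor filters applied to left- and right-eye signals.
import numpy as np

SIGMA, FREQ = 2.0, 0.25  # Gabor envelope width and carrier frequency

def gabor(x, phase):
    return np.exp(-x**2 / (2 * SIGMA**2)) * np.cos(2 * np.pi * FREQ * x + phase)

def complex_cell(left, right, disparity):
    x = np.arange(len(left)) - len(left) / 2
    energy = 0.0
    for phase in (0.0, np.pi / 2):                    # quadrature pair
        mono_l = left @ gabor(x, phase)               # left-eye simple cell
        mono_r = right @ gabor(x - disparity, phase)  # shifted right-eye field
        energy += (mono_l + mono_r) ** 2              # binocular energy
    return energy

# A bar shifted by 3 samples between the eyes drives the disparity-3 cell most.
left = np.zeros(64); left[30:34] = 1.0
right = np.roll(left, 3)
print(max(range(-6, 7), key=lambda d: complex_cell(left, right, d)))  # -> 3
```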

    The grounded theory alternative in business network research

    This paper presents a brief outline of the defining characteristics of grounded theory methodology, motivated by a desire to bring the methodology into clearer focus. Particular attention is paid to the debate grounded theory has engendered, and in doing so a number of misunderstandings, dilemmas and criticisms are highlighted. Thus, while one research strategy should not be emphasised to the exclusion of others, this paper advocates the use of grounded theory methodology as a fresh approach to addressing some of the research challenges associated with network studies.

    Coalescent Assimilation Across Wordboundaries in American English and in Polish English

    Coalescent assimilation (CA), whereby the alveolar obstruents /t, d, s, z/ in word-final position merge with word-initial /j/ to produce the postalveolars /tʃ, dʒ, ʃ, ʒ/, is one of the best-known connected speech processes in English. Owing to its commonness, CA has been discussed in numerous textbook descriptions of English pronunciation, and yet, upon comparing them, it is difficult to get a clear picture of which factors make its application likely. This paper investigates the application of CA in American English to determine a) which factors increase the likelihood of its application for each of the four alveolar obstruents, and b) what the allophonic realization of the plosives /t, d/ is when CA does not apply. To do so, the Buckeye Corpus (Pitt et al. 2007) of spoken American English is analyzed quantitatively. As a second step, these results are compared with Polish English: statistics analogous to those listed above for American English are gathered for Polish English based on the PLEC corpus (Pęzik 2012). The last section focuses on the consequences the findings have for teaching based on a native-speaker model. It is argued that a description of the phenomenon that reflects the behavior of speakers of American English more accurately than extant textbook accounts could benefit the acquisition of these patterns.
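
    The core quantitative step, finding sites where CA could apply (a word-final /t, d, s, z/ followed by word-initial /j/) in a phonemically transcribed corpus, can be sketched as below. The data format is hypothetical; the Buckeye and PLEC corpora each have their own file formats and tooling.

```python
# Minimal sketch of locating CA environments in a phonemically transcribed
# corpus: word-final /t, d, s, z/ followed by word-initial /j/. The tuple
# format below is hypothetical, not the Buckeye or PLEC file format.
CA_TRIGGERS = {"t", "d", "s", "z"}

def ca_environments(words):
    """Yield (word, next_word) pairs where CA could apply.

    words: list of (orthography, phonemes) tuples, phonemes being a list of
    phoneme symbols, e.g. ("did", ["d", "ih", "d"]).
    """
    for (w1, p1), (w2, p2) in zip(words, words[1:]):
        if p1 and p2 and p1[-1] in CA_TRIGGERS and p2[0] == "j":
            yield w1, w2

sample = [("did", ["d", "ih", "d"]), ("you", ["j", "uw"]), ("go", ["g", "ow"])]
print(list(ca_environments(sample)))  # [('did', 'you')]
```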

    A neural model of border-ownership from kinetic occlusion

    Camouflaged animals that have very similar textures to their surroundings are difficult to detect when stationary. However, when an animal moves, humans readily see a figure at a different depth than the background. How do humans perceive a figure breaking camouflage, even though the texture of the figure and its background may be statistically identical in luminance? We present a model that demonstrates how the primate visual system performs figure–ground segregation in extreme cases of breaking camouflage based on motion alone. Border-ownership signals develop as an emergent property in model V2 units whose receptive fields lie near kinetically defined borders that separate the figure and background. Model simulations support border-ownership as a general mechanism by which the visual system performs figure–ground segregation, regardless of whether figure–ground boundaries are defined by luminance or motion contrast. The gradient of motion- and luminance-related border-ownership signals explains the perceived depth ordering of the foreground and background surfaces. Our model predicts that V2 neurons which are sensitive to kinetic edges are selective to border-ownership (magnocellular B cells). A distinct population of model V2 neurons is selective to border-ownership in figures defined by luminance contrast (parvocellular B cells). B cells in model V2 receive feedback from neurons in V4 and MT with larger receptive fields to bias border-ownership signals toward the figure. We predict that neurons in V4 and MT sensitive to kinetically defined figures play a crucial role in determining whether the foreground surface accretes, deletes, or produces a shearing motion with respect to the background. This work was supported in part by CELEST (NSF SBE-0354378 and OMA-0835976), the Office of Naval Research (ONR N00014-11-1-0535) and the Air Force Office of Scientific Research (AFOSR FA9550-12-1-0436).
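
    The competition-plus-feedback motif the abstract describes, paired B cells at a border preferring opposite ownership sides, with larger-receptive-field feedback biasing the winner toward the figure, can be illustrated with a deliberately toy sketch. This is not the article's model; the numbers and the softmax competition rule are illustrative assumptions only.

```python
# Toy illustration of the competition-plus-feedback motif (not the article's
# model): two "B cells" at one edge prefer opposite ownership sides, and
# feedback from a larger-receptive-field grouping unit (a stand-in for V4/MT)
# biases the competition toward the figure side.
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

edge_drive = np.array([1.0, 1.0])        # both B cells see the same border
figure_feedback = np.array([0.6, 0.0])   # grouping unit favors side 0 (figure)
ownership = softmax(edge_drive + figure_feedback)
print(ownership)                          # side 0 wins border-ownership
```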