
    Quantitative Analysis of Tissue Clearing Based on Optical Coherence Tomography and Magnetic Resonance Imaging

    Department of Biomedical Engineering
    In the past decades, many optical imaging modalities have played a key role in understanding how neurons connect and mediate their functions. In particular, deep brain imaging has been crucial in neuroanatomy research by providing brain-wide structural information. Although optical imaging renders high-resolution brain images, its imaging depth is restricted by the inherent scattering of light. To extend the imaging depth, many optical imaging modalities have been combined with serial sectioning. As the name suggests, serial sectioning overcomes the penetration-depth limit by successively sectioning the tissue and imaging the remaining block. Although serial-sectioning techniques make it possible to visualize the whole brain, they remain labor intensive and cause tissue damage through physical sectioning. A new approach is therefore needed that can image the whole brain while preserving it intact. In recent years, tissue clearing, which renders biological samples transparent, has emerged as a solution to the penetration-depth problem. It reduces the problematic light scattering, and thus extends the limited penetration depth, either by matching the refractive index or by removing lipids. Researchers have developed various optical clearing agents such as Scale, 3DISCO, SeeDB, CLARITY, and ClearT. Scale and CLARITY increase the imaging depth by removing lipids, which are a major scattering factor, whereas 3DISCO, SeeDB, and ClearT do so through refractive-index matching. Because scattering is proportional to the refractive-index mismatch, index matching reduces scattering. These clearing techniques open up the possibility of deep brain imaging.
With the help of these pioneering tissue clearing techniques, fluorescence microscopy, including confocal microscopy (CM), multi-photon microscopy (MPM), and single plane illumination microscopy (SPIM), can now image the brain much deeper than ever before. Although efforts to eliminate problematic light scattering have been ongoing for decades, previous research has rarely quantified the enhancement of light penetration into cleared brains, focusing instead on the capability of three-dimensional visualization. The few quantification studies that exist are limited to measurements of transmittance or depth profiles. These studies could neither analyze the tissue property changes induced by clearing nor compare the characteristics of different clearing methods. In other words, despite the significant need for reliability and reproducibility, there has been no standardized technique to measure regional differences in clearing efficiency or to investigate the principles underlying the various tissue clearing methods. Here, we present optical coherence tomography (OCT) and magnetic resonance imaging (MRI) as tools to quantitatively assess tissue clearing techniques. OCT performs label-free, non-invasive optical imaging based on a Michelson interferometer. These characteristics make OCT an appropriate tool to validate the increase in imaging depth through analysis of the A-line profile. We therefore quantitatively measured the effects of diverse clearing methods, even for individual brain regions, using OCT. MRI, in turn, is a non-invasive imaging technique based on nuclear magnetic resonance (NMR). Because the MRI signal reflects atomic characteristics, we can physically investigate the fundamental principle of tissue clearing by monitoring changes in the tissue's atomic properties. Through this study, we can characterize diverse tissue clearing behaviors and compare the existing clearing techniques.
Furthermore, we provide a standard for evaluating the various tissue clearing methods, which allows the choice of a proper clearing technique for a given experimental purpose. This study can therefore increase the reliability and reproducibility of experimental results.
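As a rough illustration of how an OCT A-line profile can quantify clearing efficiency (a minimal sketch, not the authors' actual analysis pipeline; all names and values are hypothetical), one can fit a single-scattering exponential decay I(z) = I0·exp(−2μz) to the depth profile and compare the attenuation coefficient μ before and after clearing:

```python
import math

def attenuation_coefficient(depths_um, intensities):
    """Fit I(z) = I0 * exp(-2 * mu * z) to an OCT A-line depth profile
    by linear regression on log I; returns mu (per micrometre)."""
    zs = list(depths_um)
    ys = [math.log(i) for i in intensities]
    n = len(zs)
    z_mean = sum(zs) / n
    y_mean = sum(ys) / n
    slope = (sum((z - z_mean) * (y - y_mean) for z, y in zip(zs, ys))
             / sum((z - z_mean) ** 2 for z in zs))
    return -slope / 2.0

# Synthetic A-lines: cleared tissue attenuates more slowly than uncleared.
depths = [i * 5.0 for i in range(200)]                  # 0-995 um
uncleared = [math.exp(-2 * 0.004 * z) for z in depths]  # mu = 0.004 /um
cleared = [math.exp(-2 * 0.001 * z) for z in depths]    # mu = 0.001 /um
assert attenuation_coefficient(depths, cleared) < \
       attenuation_coefficient(depths, uncleared)
```

A smaller fitted μ after clearing corresponds to reduced scattering and hence deeper light penetration, which is the quantity such an OCT-based assessment would track per brain region.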

    Prosodically-conditioned factive inferences in Korean: An experimental study

    Much work has been done to capture the systematic variation in projective behaviors of factive inferences. However, not as much has been said about variation in factivity below the level of projection. This paper presents an experimental study which probes and confirms that Korean showcases patterns of variation in this vein: Factive inferences systematically arise from certain V+CPs, but only under certain prosody, even in unembedded contexts. Characterizing this pattern as 'prosodically-conditioned factive inferences', the paper proposes a pragmatic analysis of it couched in alternative semantics. The analysis motivates and defends a new interpretive principle which governs how alternatives come into contrast with each other, and re-examines the ontological status of factivity.

    Synergizing Roughness Penalization and Basis Selection in Bayesian Spline Regression

    Bayesian P-splines and basis determination through Bayesian model selection are both commonly employed strategies for nonparametric regression using spline basis expansions within the Bayesian framework. Despite their widespread use, each method has particular limitations that may introduce estimation bias depending on the nature of the target function. To overcome the limitations associated with each method while capitalizing on their respective strengths, we propose a new prior distribution that integrates the essentials of both approaches. The proposed prior distribution assesses the complexity of the spline model based on a penalty term formed by a convex combination of the penalties from both methods. The proposed method adapts to the unknown level of smoothness while achieving the minimax-optimal posterior contraction rate up to a logarithmic factor. We provide an efficient Markov chain Monte Carlo algorithm for implementing the proposed approach. Our extensive simulation study reveals that the proposed method outperforms its competitors in terms of performance metrics or model complexity.
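The core idea of the convex-combination penalty can be sketched as follows (a hedged illustration only, not the paper's actual prior; the function names, the weight `theta`, and the scale `lam` are all hypothetical): a P-spline roughness term penalizes second differences of the spline coefficients, a basis-selection term penalizes the number of basis functions, and the combined penalty interpolates between them.

```python
def roughness_penalty(coefs):
    """P-spline-style roughness: sum of squared second differences
    of the spline coefficients."""
    return sum((coefs[i] - 2.0 * coefs[i + 1] + coefs[i + 2]) ** 2
               for i in range(len(coefs) - 2))

def combined_penalty(coefs, theta, lam):
    """Convex combination of the roughness penalty (Bayesian P-splines)
    and a model-size penalty (basis selection); theta in [0, 1] weights
    the two, lam scales the overall strength."""
    assert 0.0 <= theta <= 1.0
    return lam * (theta * roughness_penalty(coefs) +
                  (1.0 - theta) * len(coefs))

# A perfectly linear coefficient sequence has zero roughness, so only
# the basis-size term contributes to the combined penalty.
linear = [0.5 * k for k in range(10)]
assert roughness_penalty(linear) == 0.0
assert combined_penalty(linear, theta=0.5, lam=2.0) == 2.0 * 0.5 * 10
```

Setting `theta = 1` recovers a pure roughness (P-spline-style) penalty, while `theta = 0` penalizes only the basis size, which is how such a prior could adapt across target functions of different smoothness.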

    Identity construction through gendered terms of addresses in Korean

    How should a speaker call a hearer? In this paper, we present an experimental study which probes the social and interactional meaning of Korean gendered terms of addresses (GTAs: unnie, oppa, noona, hyung). GTAs prescriptively index the genders of both interlocutors, but are beginning to be used in ‘gender-mismatch’ patterns. Based on the experimental results, we argue that both the prescription-conforming and the ‘mismatching’ uses of GTAs are each associated with unique, complex webs of meanings which track the shifting gender ideologies in Korea. In particular, mismatching uses of GTAs are shown to often function as speakers’ strategy to break away from established gender norms, including traditional gender roles and the sexualization of female-male relations.

    Multiple Model Poisson Multi-Bernoulli Mixture for 5G Mapping

    In this paper, we evaluate and compare the multiple model Poisson multi-Bernoulli mixture (MM-PMBM) and the multiple model probability hypothesis density (MM-PHD) filters for mapping a propagation environment, specified by multiple types of objects, using 5G millimeter-wave signals. To develop an MM-PMBM applicable to 5G scenarios, we design the density representation, data structure, and implementation strategy. The simulation results demonstrate that the MM-PMBM captures the objects and is more robust to both missed detections and false alarms than the MM-PHD.