
    Secretory vesicles are preferentially targeted to areas of low molecular SNARE density

    Intercellular communication is commonly mediated by the regulated fusion, or exocytosis, of vesicles with the cell surface. SNARE (soluble N-ethylmaleimide-sensitive factor attachment protein receptor) proteins are the catalytic core of the secretory machinery, driving vesicle and plasma membrane merger. Plasma membrane SNAREs (tSNAREs) are proposed to reside in dense clusters containing many molecules, thus providing a concentrated reservoir to promote membrane fusion. However, biophysical experiments suggest that a small number of SNAREs are sufficient to drive a single fusion event. Here we show, using molecular imaging, that the majority of tSNARE molecules are spatially separated from secretory vesicles. Furthermore, the motilities of individual tSNAREs are constrained in membrane microdomains, maintaining a non-random molecular distribution and limiting the maximum number of molecules encountered by secretory vesicles. Together, our results provide a new model for the molecular mechanism of regulated exocytosis and demonstrate the exquisite organization of the plasma membrane at the level of individual molecular machines.

    A general strategy to develop fluorogenic polymethine dyes for bioimaging

    Fluorescence imaging is an invaluable tool for studying biological processes, and further progress depends on the development of advanced fluorogenic probes that reach intracellular targets and label them with high specificity. Excellent fluorogenic rhodamine dyes have been reported, but they often require long and low-yielding syntheses and are spectrally limited to the visible range. Here we present a general strategy to transform polymethine compounds into fluorogenic dyes using an intramolecular ring-closure approach. We illustrate the generality of this method by creating both spontaneously blinking and no-wash, turn-on polymethine dyes with emissions across the visible and near-infrared spectrum. These probes are compatible with self-labelling proteins and small-molecule targeting ligands, and can be combined with rhodamine-based dyes for multicolour and fluorescence lifetime multiplexing imaging. This strategy provides access to bright, fluorogenic dyes that emit at wavelengths more red-shifted than those of existing rhodamine-based dyes.

    A Survey of Deep Learning-Based Object Detection

    Object detection is one of the most important and challenging branches of computer vision. It has been widely applied in everyday life, for example in security monitoring and autonomous driving, with the purpose of locating instances of semantic objects of a certain class. With the rapid development of deep learning networks for detection tasks, the performance of object detectors has greatly improved. To give a thorough and deep understanding of the main development status of the object detection pipeline, in this survey we first analyze existing typical detection models and describe the benchmark datasets. We then provide a comprehensive overview of a variety of object detection methods in a systematic manner, covering one-stage and two-stage detectors. Moreover, we list traditional and new applications, and analyze some representative branches of object detection. Finally, we discuss how to exploit these object detection methods to build an effective and efficient system, and point out a set of development trends to help readers follow the state of the art and further research. (Comment: 30 pages, 12 figures)
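
    One post-processing step shared by virtually all the one-stage and two-stage pipelines covered in such surveys is non-maximum suppression (NMS), which prunes overlapping detections of the same object. Below is a minimal greedy NMS sketch in Python/NumPy; the function name and the 0.5 IoU threshold are illustrative defaults, not taken from the survey.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression.

    boxes:  (N, 4) array of [x1, y1, x2, y2]
    scores: (N,) confidence scores
    Returns indices of kept boxes, highest score first.
    """
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # process highest-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the current box with all remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop boxes that overlap the kept box at or above the threshold.
        order = order[1:][iou < iou_thresh]
    return keep
```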

    Appearance and Geometry Assisted Visual Navigation in Urban Areas

    Navigation is a fundamental task for mobile robots in applications such as exploration, surveillance, and search and rescue. The task involves solving the simultaneous localization and mapping (SLAM) problem, in which a map of the environment is constructed. For this map to be useful for a given application, a suitable scene representation needs to be defined that allows spatial information sharing between robots, and also between humans and robots. High-level scene representations have the benefit of being more robust and more readily exchangeable for interpretation. With the aim of higher-level scene representation, in this work we explore high-level landmarks and how their geometric and appearance information can assist mobile robot navigation in urban areas.

    In visual SLAM, image registration is a key problem. While feature-based methods such as scale-invariant feature transform (SIFT) matching are popular, they do not utilize appearance information as a whole and suffer from low-resolution images. We study appearance-based methods and propose a scale-space integrated Lucas-Kanade method that can estimate geometric transformations while taking image appearance at different resolutions into account (see the sketch after this abstract). We compare our method against state-of-the-art methods and show that it can register images efficiently with high accuracy.

    In urban areas, planar building facades (PBFs) are basic components of the quasi-rectilinear environment. Hence, segmentation and mapping of PBFs can improve a robot's scene understanding and localization. We propose a vision-based PBF segmentation and mapping technique that combines both appearance and geometric constraints to segment out planar regions. Geometric constraints such as reprojection errors, orientation constraints, and coplanarity constraints are then used in an optimization process to improve the mapping of PBFs.

    A major issue in monocular visual SLAM is scale drift. While depth sensors such as lidar are free from scale drift, they are usually more expensive than cameras. To enable low-cost mobile robots equipped with monocular cameras to obtain accurate position information, we use a 2D lidar map to rectify imprecise visual SLAM results using planar structures. We propose a two-step optimization approach assisted by a penalty function to improve on low-quality local minima.

    Robot paths for navigation can be either automatically generated by a motion planning algorithm or provided by a human. In both cases, a scene representation of the environment, i.e., a map, is useful for specifying meaningful tasks for the robot. However, SLAM usually produces a sparse scene representation consisting of low-level landmarks, such as point clouds, which are neither convenient nor intuitive for task specification. We present a system that allows users to program mobile robots using high-level landmarks derived from appearance data.
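
    To make the registration idea concrete, here is a minimal coarse-to-fine (scale-space) Lucas-Kanade sketch in Python with NumPy/SciPy, restricted to pure translation. This is a simplification under stated assumptions: the dissertation's method handles richer geometric transformations and integrates appearance across resolutions more carefully, and all names here are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

def lk_translation(ref, mov, p, iters=20, tol=1e-4):
    """Refine a translation p = [dx, dy] so that shifting mov by p matches ref."""
    gy, gx = np.gradient(ref)  # gradients of the reference image
    for _ in range(iters):
        # Warp mov by the current estimate; nd_shift takes (row, col) = (dy, dx).
        warped = nd_shift(mov, (p[1], p[0]), order=1)
        err = warped - ref
        # Normal equations of the linearized brightness-constancy residual.
        A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                      [np.sum(gx * gy), np.sum(gy * gy)]])
        b = np.array([np.sum(gx * err), np.sum(gy * err)])
        dp = np.linalg.solve(A, b)
        p = p + dp
        if np.linalg.norm(dp) < tol:
            break
    return p

def pyramidal_lk(ref, mov, levels=3):
    """Coarse-to-fine translation estimate over a Gaussian scale space."""
    p = np.zeros(2)
    for lvl in reversed(range(levels)):
        s = 2 ** lvl
        # Blur, then subsample, to build the current pyramid level.
        r = gaussian_filter(ref, sigma=s)[::s, ::s]
        m = gaussian_filter(mov, sigma=s)[::s, ::s]
        p = lk_translation(r, m, p)
        if lvl > 0:
            p = p * 2  # a shift of p at this level is 2p at the next finer level
    return p  # [dx, dy] in full-resolution pixels
```

    The coarse levels capture large displacements that a single-scale linearization would miss; each finer level then refines the estimate, which is the core benefit of integrating scale space into the Lucas-Kanade framework.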

    Localization-based fluorescence correlation spectroscopy


    Light Microscopy Combined with Computational Image Analysis Uncovers Virus-Specific Infection Phenotypes and Host Cell State Variability

    The study of virus infection phenotypes and their variability plays a critical role in understanding viral pathogenesis and the host response. Virus-host interactions can be investigated by light and various label-free microscopy methods, which provide a powerful tool for the spatiotemporal analysis of patterns at the cellular and subcellular levels in live or fixed cells. Analysis of microscopy images is increasingly complemented by sophisticated statistical methods and leverages artificial intelligence (AI) for image denoising, segmentation, classification, and tracking. Work in this thesis demonstrates that combining microscopy with AI techniques enables models that accurately detect and quantify viral infection via the virus-induced cytopathic effect (CPE). Furthermore, it shows that statistical analysis of microscopy image data can disentangle stochastic and deterministic factors that contribute to viral infection variability, such as the cellular state. In summary, the integration of microscopy and computational image analysis offers a powerful and flexible approach for studying virus infection phenotypes and variability, ultimately contributing to a more advanced understanding of infection processes and creating possibilities for the development of more effective antiviral strategies.
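
    As a rough illustration of the kind of model such CPE-based infection detection uses, here is a deliberately small PyTorch sketch of a patch-level classifier (infected vs. uninfected). The architecture, input size, and all names are assumptions for illustration, not the thesis's actual model.

```python
import torch
import torch.nn as nn

class CPEClassifier(nn.Module):
    """Tiny CNN classifying microscopy patches as uninfected (0) or infected (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global average pooling
        )
        self.head = nn.Linear(32, 2)

    def forward(self, x):
        # x: (B, 1, H, W) grayscale microscopy patches
        f = self.features(x).flatten(1)
        return self.head(f)

model = CPEClassifier()
logits = model(torch.randn(4, 1, 64, 64))  # -> (4, 2) class logits
```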

    ๊ฐ์ฒด๊ฒ€์ถœ ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ์œ„ํ•œ ์ผ๊ด€์„ฑ๊ณผ ๋ณด๊ฐ„๋ฒ• ๊ธฐ๋ฐ˜์˜ ์ค€์ง€๋„ ํ•™์Šต

    Doctoral dissertation -- Seoul National University, Graduate School of Convergence Science and Technology, Department of Transdisciplinary Studies (Intelligent Convergence Systems major), August 2021. Advisor: Nojun Kwak.

    Object detection, one of the main areas of computer vision research, is the task of predicting where and what the objects in an RGB image are. While object detection requires a massive number of annotated samples to guarantee performance, placing bounding boxes around every object in each sample is costly and time-consuming. To alleviate this problem, Weakly-Supervised Learning and Semi-Supervised Learning methods have been proposed, but they still show large efficiency gaps relative to supervised learning and require much further research. In Semi-Supervised Learning in particular, recent deep learning-based methods had not yet been applied to object detection. In this dissertation, we apply the latest deep learning-based Semi-Supervised Learning methods to object detection, identifying and solving the problems that arise when established Semi-Supervised Learning algorithms are applied directly. Specifically, we adapt Consistency Regularization (CR) and Interpolation Regularization (IR) to object detection individually, and then combine them for further performance improvement. This is the first attempt to extend CR and IR, previously used only in conventional semi-supervised classification, to the object detection problem.

    First, we propose a novel Consistency-based Semi-Supervised Learning method for object Detection (CSD), which uses consistency constraints to enhance detection performance by making full use of the available unlabeled data. Specifically, the consistency constraint is applied not only to object classification but also to localization (see the sketch after this abstract). We also propose Background Elimination (BE) to avoid the negative effect of predominant backgrounds on detection performance. We evaluated the proposed CSD on both single-stage and two-stage detectors, and the results show the effectiveness of our method.

    Second, we present a novel Interpolation-based Semi-Supervised Learning method for object Detection (ISD), which considers and solves the problems caused by applying conventional Interpolation Regularization (IR) directly to object detection. We divide the output of the model into two types according to the objectness scores of the two original patches that are mixed in IR, and apply a separate, suitable loss to each type in an unsupervised manner. The proposed losses dramatically improve the performance of Semi-Supervised Learning as well as supervised learning.

    Third, we introduce a method for combining CSD and ISD. CSD requires one additional prediction to apply consistency regularization, allocating twice (2x) as much memory as conventional supervised learning; ISD computes two supplementary predictions for interpolation regularization, taking three times (3x) as much memory as conventional training. Naively combining CSD and ISD would therefore require three extra predictions. By shuffling the samples within the mini-batch for CSD, we reduce the additional predictions from three to two, cutting memory consumption. Furthermore, combining the two algorithms yields a further performance improvement.
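
    Below is a minimal PyTorch sketch of the CSD-style consistency loss between predictions on an image and on its horizontally flipped copy, assuming the detector exposes per-anchor class logits and box offsets. Following the common formulation of consistency-based SSL detection, it uses a Jensen-Shannon divergence for classification and an L2 term for localization; the caller is assumed to have mapped the flipped predictions back to the original geometry, and Background Elimination is simplified to an optional boolean mask.

```python
import torch
import torch.nn.functional as F

def csd_consistency_loss(cls_a, loc_a, cls_b_flipped, loc_b_flipped, bg_mask=None):
    """Consistency loss between predictions on an image (a) and its
    horizontal flip (b), after the flipped outputs are mapped back.

    cls_*: (N, C) class logits per anchor; loc_*: (N, 4) box offsets.
    bg_mask: optional (N,) bool mask of anchors kept after Background
    Elimination (anchors dominated by background are excluded).
    """
    p_a = F.softmax(cls_a, dim=-1)
    p_b = F.softmax(cls_b_flipped, dim=-1)
    # Jensen-Shannon divergence between the two class distributions.
    m = 0.5 * (p_a + p_b)
    js = 0.5 * (F.kl_div(m.log(), p_a, reduction='none').sum(-1)
                + F.kl_div(m.log(), p_b, reduction='none').sum(-1))
    # L2 consistency on the localization outputs; the x-offset of the
    # flipped branch is assumed to have been negated by the caller.
    loc = ((loc_a - loc_b_flipped) ** 2).sum(-1)
    per_anchor = js + loc
    if bg_mask is not None:
        per_anchor = per_anchor[bg_mask]
    return per_anchor.mean()
```

    Because the flipped image carries no labels, this loss can be computed on unlabeled data and simply added to the supervised detection loss, which is what lets CSD exploit the unlabeled pool.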

    OV-VG: A Benchmark for Open-Vocabulary Visual Grounding

    Open-vocabulary learning has emerged as a cutting-edge research area, particularly in light of the widespread adoption of vision-based foundation models. Its primary objective is to comprehend novel concepts that are not encompassed within a predefined vocabulary. One key facet of this endeavor is visual grounding, which entails locating a specific region within an image based on a corresponding language description. While current foundation models excel at various vision-language tasks, there is a noticeable absence of models specifically tailored to open-vocabulary visual grounding. This work introduces novel and challenging open-vocabulary (OV) tasks, namely Open-Vocabulary Visual Grounding (OV-VG) and Open-Vocabulary Phrase Localization (OV-PL). The overarching aim is to establish connections between language descriptions and the localization of novel objects. To facilitate this, we have curated a comprehensive annotated benchmark encompassing 7,272 OV-VG images and 1,000 OV-PL images. To address these challenges, we investigated various baseline methodologies rooted in existing open-vocabulary object detection, visual grounding, and phrase localization frameworks. Surprisingly, we discovered that state-of-the-art methods often falter in diverse scenarios. Consequently, we developed a novel framework that integrates two critical components: Text-Image Query Selection and Language-Guided Feature Attention. These modules are designed to bolster the recognition of novel categories and enhance the alignment between visual and linguistic information. Extensive experiments demonstrate the efficacy of our proposed framework, which consistently attains state-of-the-art performance on the OV-VG task, and ablation studies provide further evidence of the effectiveness of our models. Code and datasets will be made publicly available at https://github.com/cv516Buaa/OV-VG
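
    The abstract does not specify the two modules, so as a rough illustration of what language-guided feature attention typically looks like, here is a hedged PyTorch sketch of cross-attention from visual tokens to text tokens. The module name, shapes, and design are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LanguageGuidedAttention(nn.Module):
    """Cross-attention letting visual tokens attend to text tokens.

    A generic sketch: the actual OV-VG module may differ substantially.
    """
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual_tokens, text_tokens):
        # visual_tokens: (B, Nv, dim) flattened image features
        # text_tokens:   (B, Nt, dim) encoded language description
        attended, _ = self.attn(query=visual_tokens,
                                key=text_tokens,
                                value=text_tokens)
        # Residual connection keeps the original visual signal.
        return self.norm(visual_tokens + attended)

# Usage sketch with random tensors (shapes are illustrative):
module = LanguageGuidedAttention()
vis = torch.randn(2, 400, 256)   # e.g. a 20x20 feature map, flattened
txt = torch.randn(2, 12, 256)    # e.g. 12 encoded text tokens
out = module(vis, txt)           # -> (2, 400, 256)
```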