
    Rationale and Methodology of Reprogramming for Generation of Induced Pluripotent Stem Cells and Induced Neural Progenitor Cells.

    Great progress has been made in modifying somatic cell fate since the technology for generating induced pluripotent stem cells (iPSCs) was first reported in 2006. Later, induced neural progenitor cells (iNPCs) were generated from mouse and human cells, bypassing some of the concerns and risks of using iPSCs in neuroscience applications. To overcome the limitations of viral vector-induced reprogramming, bioactive small molecules (SMs) have been explored to enhance the efficiency of reprogramming or even to replace transcription factors (TFs), making the reprogrammed cells more amenable to clinical application. Chemically induced reprogramming is technically simple, but the choice of SMs at each step of the procedure is vital. The mechanisms underlying cell transdifferentiation are still poorly understood, although experimental data and insights have indicated the rationale of cell reprogramming. The process begins with the forced expression of specific TFs or the activation/inhibition of cell signaling pathways by bioactive chemicals under defined culture conditions; this initiates the reactivation of endogenous gene programs and an optimal stoichiometric expression of the endogenous pluri- or multipotency genes, and finally leads to the birth of reprogrammed cells such as iPSCs and iNPCs. In this review, we first outline the rationale and discuss the methodology of generating iPSCs and iNPCs in a stepwise manner, and then we discuss chemical-based reprogramming of iPSCs and iNPCs.

    Investigation of relaxation factor in Landweber iterative algorithm for electrical capacitance tomography

    It is crucial to select a suitable relaxation factor in iterative image reconstruction algorithms (e.g. the Landweber iterative algorithm) for electrical capacitance tomography (ECT), because this factor governs convergence. Simulations show that the relaxation factor should be selected adaptively according to the sensor structure (e.g. the number of electrodes) and the permittivity distribution underlying the capacitance measurements. With different numbers of electrodes and four typical permittivity distributions, the relaxation factor and the related convergence are investigated in terms of the change in relative image error. It is shown that the relaxation factor can be chosen based on the upper bound of all stable relaxation factors. The conclusions in this paper can be applied to practical industrial processes for the adaptive selection of the relaxation factor and the number of iterations needed.
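As a concrete illustration of the quantity being tuned, here is a minimal NumPy sketch of the Landweber iteration for ECT together with the standard spectral-norm bound on stable relaxation factors. The matrix shapes and the projection onto [0, 1] are generic textbook choices, not taken from the paper.

```python
import numpy as np

def landweber(S, lam, alpha, n_iter=200):
    """Landweber iterative reconstruction for ECT.

    S     : (m, n) normalized sensitivity matrix
    lam   : (m,) normalized capacitance measurements
    alpha : relaxation factor; convergence requires 0 < alpha < 2 / ||S||_2^2
    """
    g = S.T @ lam                                 # back-projection initial guess
    for _ in range(n_iter):
        g = g + alpha * (S.T @ (lam - S @ g))     # gradient step on ||S g - lam||^2
        g = np.clip(g, 0.0, 1.0)                  # keep normalized permittivity physical
    return g

def alpha_upper_bound(S):
    """Upper bound on stable relaxation factors (spectral-norm condition)."""
    return 2.0 / np.linalg.norm(S, 2) ** 2
```

A larger relaxation factor converges faster but only up to this bound, which is why the abstract ties its adaptive choice to the sensor structure: changing the number of electrodes changes `S` and hence the bound.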

    A MAPAEKF-SLAM Algorithm with Recursive Mean and Covariance of Process and Measurement Noise Statistics

    The most popular filtering method for solving the Simultaneous Localization and Mapping (SLAM) problem is the Extended Kalman Filter (EKF). Essentially, it requires prior stochastic knowledge of both the process and measurement noise statistics. To avoid this requirement, these noise statistics are usually defined at the beginning and kept fixed for the whole process. This can give the desired robustness in simulation; in practice, however, because of the continuous uncertainty introduced as the dynamic system is integrated over time, this approach is strongly discouraged: improperly defined noise will not only degrade the filter's performance but may also drive the filter to divergence. For this reason, adaptive strategies are commonly used to equip the classical filter with the ability to approximate the noise statistics. By tracking the noise statistics closely, the robustness and accuracy of an EKF can be increased. However, most existing adaptive EKFs assume that the process and measurement noise are zero-mean and only adapt the covariances, so the robustness of the EKF can still be enhanced. This paper presents a method, the MAPAEKF-SLAM algorithm, for solving the SLAM problem on a mobile robot, TurtleBot2. A classical EKF is estimated using Maximum a Posteriori (MAP) estimation; due to the existence of unobserved values, the EKF is also smoothed once using a fixed-interval smoothing method, which keeps the derivation tractable under the MAP formulation. The proposed method was simulated and compared with the conventional one, and it shows better accuracy in terms of the root mean square error (RMSE) of both the estimated map coordinates (EMC) and the estimated path coordinates (EPC).
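The idea of estimating both the noise mean and covariance online, rather than assuming zero-mean noise, can be illustrated with a toy 1-D linear filter. The recursive sample-statistics updates below are a generic covariance-matching sketch of that idea, not the paper's MAP derivation or its TurtleBot2 model.

```python
import numpy as np

def adaptive_kf(zs, F=1.0, H=1.0, q=1e-3, r0=1.0):
    """1-D Kalman filter that recursively estimates the measurement-noise
    mean and covariance from the innovation sequence (toy illustration)."""
    x, P = 0.0, 1.0
    r_mean, r_cov = 0.0, r0
    out = []
    for k, z in enumerate(zs, start=1):
        # predict
        x_pred = F * x
        P_pred = F * P * F + q
        # innovation, corrected by the running noise-mean estimate
        nu = z - H * x_pred - r_mean
        S = H * P_pred * H + r_cov
        K = P_pred * H / S
        # update
        x = x_pred + K * nu
        P = (1.0 - K * H) * P_pred
        # recursive sample mean/covariance of the noise statistics
        r_mean += (nu - r_mean) / k
        r_cov += ((nu * nu - H * P_pred * H) - r_cov) / k
        r_cov = max(r_cov, 1e-6)                 # keep covariance positive
        out.append(x)
    return np.array(out)
```

With a biased sensor, the running `r_mean` absorbs the offset that a zero-mean adaptive filter would fold into the state, which is the gap in existing adaptive EKFs that the abstract points at.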

    Generalized Category Discovery with Decoupled Prototypical Network

    Generalized Category Discovery (GCD) aims to recognize both known and novel categories in a set of unlabeled data with the aid of another dataset labeled with only the known categories. Without considering the differences between known and novel categories, current methods learn about them in a coupled manner, which can hurt the model's generalization and discriminative ability. Furthermore, the coupled training approach prevents these models from transferring category-specific knowledge explicitly from labeled to unlabeled data, which can lose high-level semantic information and impair model performance. To mitigate these limitations, we present a novel model called the Decoupled Prototypical Network (DPN). By formulating a bipartite matching problem over category prototypes, DPN not only decouples known and novel categories to pursue different training targets effectively, but also aligns known categories across labeled and unlabeled data to transfer category-specific knowledge explicitly and capture high-level semantics. Furthermore, DPN learns more discriminative features for both known and novel categories through our proposed Semantic-aware Prototypical Learning (SPL). Besides capturing meaningful semantic information, SPL also alleviates the noise of hard pseudo-labels through semantic-weighted soft assignment. Extensive experiments show that DPN outperforms state-of-the-art models by a large margin on all evaluation metrics across multiple benchmark datasets. Code and data are available at https://github.com/Lackel/DPN. Comment: Accepted by AAAI 202
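The prototype-matching step can be made concrete with a small sketch. DPN solves the assignment with an optimal bipartite matching over learned prototypes; this toy version brute-forces the optimal assignment over cosine similarities (workable only for a handful of categories) and treats unmatched unlabeled prototypes as novel.

```python
import numpy as np
from itertools import permutations

def match_prototypes(labeled, unlabeled):
    """Optimal bipartite matching of labeled-category prototypes to
    unlabeled cluster prototypes by total cosine similarity.

    Returns (mapping labeled->unlabeled index, list of novel cluster indices).
    """
    a = labeled / np.linalg.norm(labeled, axis=1, keepdims=True)
    b = unlabeled / np.linalg.norm(unlabeled, axis=1, keepdims=True)
    sim = a @ b.T                                   # (K_l, K_u) cosine similarities
    k_l, k_u = sim.shape
    best, best_cols = -np.inf, None
    for cols in permutations(range(k_u), k_l):      # injective assignment
        score = sum(sim[i, j] for i, j in enumerate(cols))
        if score > best:
            best, best_cols = score, cols
    mapping = dict(enumerate(best_cols))
    novel = [j for j in range(k_u) if j not in best_cols]
    return mapping, novel
```

Clusters that win a match are aligned with a known category (enabling explicit knowledge transfer), while the leftover prototypes define the decoupled novel-category targets.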

    Calibration-based Dual Prototypical Contrastive Learning Approach for Domain Generalization Semantic Segmentation

    Prototypical contrastive learning (PCL) has recently been widely used to learn class-wise domain-invariant features. These methods are based on the assumption that the prototypes, defined as the central value of a class within a certain domain, are domain-invariant. Since the prototypes of different domains have discrepancies as well, the class-wise domain-invariant features learned from the source domain by PCL need to be aligned with the prototypes of other domains simultaneously. However, the prototypes of the same class in different domains may differ, while the prototypes of different classes may be similar, which can hinder the learning of class-wise domain-invariant features. Based on these observations, a calibration-based dual prototypical contrastive learning (CDPCL) approach is proposed to reduce the domain discrepancy between the learned class-wise features and the prototypes of different domains for domain generalization semantic segmentation. It contains an uncertainty-guided PCL (UPCL) and a hard-weighted PCL (HPCL). Since the domain discrepancies of the prototypes may vary across classes, we propose an uncertainty probability matrix to represent the domain discrepancies of the prototypes of all classes. The UPCL estimates this matrix to calibrate the weights of the prototypes during PCL. Moreover, since the prototypes of different classes may be similar in some circumstances, meaning these prototypes are hard to align, the HPCL generates a hard-weighted matrix to calibrate the weights of such hard-aligned prototypes during PCL. Extensive experiments demonstrate that our approach achieves superior performance over current approaches on domain generalization semantic segmentation tasks. Comment: Accepted by ACM MM'2
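The calibration idea — scaling how strongly each class's features are pulled toward its prototype — can be sketched as a weighted prototype contrastive loss. The per-class weight vector below stands in for the role of the uncertainty/hard-weighted matrices; this is an illustrative simplification, not the paper's exact loss.

```python
import numpy as np

def calibrated_pcl_loss(feats, labels, protos, class_w, tau=0.1):
    """Prototype contrastive loss with per-class calibration weights.

    feats   : (N, D) feature vectors     labels : (N,) int class labels
    protos  : (C, D) class prototypes    class_w: (C,) calibration weights
    """
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    logits = f @ p.T / tau                       # (N, C) prototype similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    n = np.arange(len(labels))
    # weight each sample's term by its class's calibration weight
    return -(class_w[labels] * log_prob[n, labels]).mean()
```

Setting a class's weight low de-emphasizes prototypes whose cross-domain discrepancy is uncertain, which is the calibration effect UPCL and HPCL aim for.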

    A Diffusion Weighted Graph Framework for New Intent Discovery

    New Intent Discovery (NID) aims to recognize both new and known intents from unlabeled data with the aid of limited labeled data containing only known intents. Without considering structural relationships between samples, previous methods generate noisy supervisory signals that cannot strike a balance between quantity and quality, hindering the formation of new intent clusters and the effective transfer of pre-training knowledge. To mitigate this limitation, we propose a novel Diffusion Weighted Graph Framework (DWGF) to capture both the semantic similarities and structural relationships inherent in data, enabling more sufficient and reliable supervisory signals. Specifically, for each sample, we diffuse neighborhood relationships along semantic paths guided by its nearest neighbors for multiple hops, characterizing its local structure discriminatively. Then, we sample its positive keys and weight them based on semantic similarities and local structures for contrastive learning. During inference, we further propose a Graph Smoothing Filter (GSF) that explicitly uses the structural relationships to filter out the high-frequency noise embodied in semantically ambiguous samples on cluster boundaries. Extensive experiments show that our method outperforms state-of-the-art models on all evaluation metrics across multiple benchmark datasets. Code and data are available at https://github.com/yibai-shi/DWGF. Comment: EMNLP 2023 Main
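The low-pass intuition behind a graph smoothing filter can be sketched with the usual symmetrically normalized adjacency operator: repeatedly averaging each embedding with its neighbors attenuates high-frequency (boundary) noise. This generic sketch illustrates the flavor of GSF, not the paper's exact formulation.

```python
import numpy as np

def graph_smooth(X, A, hops=2):
    """Low-pass graph filtering of node embeddings.

    X : (N, D) embeddings   A : (N, N) binary adjacency matrix
    Applies the normalized operator D^{-1/2} (A + I) D^{-1/2} `hops` times.
    """
    A_hat = A + np.eye(len(A))                # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetrically normalized operator
    for _ in range(hops):
        X = S @ X                             # each node averages with neighbors
    return X
```

After smoothing, embeddings within a densely connected cluster contract toward their cluster mean, so semantically ambiguous boundary samples move toward whichever cluster their structural neighbors favor.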

    ZS4C: Zero-Shot Synthesis of Compilable Code for Incomplete Code Snippets using ChatGPT

    Technical question-and-answer (Q&A) sites such as Stack Overflow have become an important source of knowledge for software developers. However, code snippets on Q&A sites are usually uncompilable and semantically incomplete due to unresolved types and missing dependent libraries, which raises obstacles for users who want to reuse or analyze them. Prior approaches are either not designed for synthesizing compilable code or suffer from low compilation success rates. To address this problem, we propose ZS4C, a lightweight approach that performs zero-shot synthesis of compilable code from incomplete code snippets using a large language model (LLM). ZS4C operates in two stages. In the first stage, ZS4C uses an LLM, i.e., ChatGPT, to identify missing import statements for a given code snippet, leveraging our task-specific prompt template. In the second stage, ZS4C fixes compilation errors caused by incorrect import statements and syntax errors through collaboration between ChatGPT and a compiler. We thoroughly evaluated ZS4C on a widely used benchmark called StatType-SO against the state-of-the-art approach SnR. Compared with SnR, ZS4C improves the compilation rate from 63% to 87.6%, a 39.3% relative improvement. On average, ZS4C also infers more accurate import statements than SnR, with a 6.6% improvement in F1 score.

    Maximum likelihood estimation-assisted ASVSF through state covariance-based 2D SLAM algorithm

    The smooth variable structure filter (SVSF) is a relatively new robust predictor-corrector method for state estimation. To be used effectively, the SVSF requires an accurate system model and exact prior knowledge of both the process and measurement noise statistics. Unfortunately, the system model is always somewhat inaccurate because of simplifications made at the outset, and the small additive noises are only partially known or even unknown. This limitation can degrade the performance of the SVSF or even lead to divergence. For this reason, this paper proposes an adaptive smooth variable structure filter (ASVSF) obtained by conditioning the probability density function of the measurement on the unknown parameters at each iteration. The proposed method is applied to the localization and direct point-based observation task of a wheeled mobile robot, TurtleBot2. Finally, in realistic simulations compared with a conventional method, the proposed method shows better accuracy and stability in terms of the root mean square error (RMSE) of the estimated map coordinates (EMC) and the estimated path coordinates (EPC).