
    CX3CR1+ interstitial dendritic cells form a contiguous network throughout the entire kidney

    Dendritic cells (DCs) interface innate and adaptive immunity in nonlymphoid organs; however, the exact distribution and types of DCs within the kidney are not known. We utilized CX3CR1GFP/+ mice to characterize the anatomy and phenotype of tissue-resident CX3CR1+ DCs within the normal kidney. Laser-scanning confocal microscopy revealed an extensive, contiguous network of stellate-shaped CX3CR1+ DCs throughout the interstitial and mesangial spaces of the entire kidney. Intravital microscopy of the superficial cortex showed stationary interstitial CX3CR1+ DCs that continually probe the surrounding tissue environment through dendrite extensions. Flow cytometry of renal CX3CR1+ DCs showed significant coexpression of CD11c and F4/80, high expression of major histocompatibility complex class II and FcR, and immature costimulatory capacity but competent phagocytic ability, indicative of tissue-resident, immature DCs ready to respond to environmental cues. Thus, within the renal parenchyma, there exists little immunological privilege from the surveillance provided by renal CX3CR1+ DCs, a major constituent of the heterogeneous mononuclear phagocyte system populating the normal kidney.

    Learning Mazes with Aliasing States: An LCS Algorithm with Associative Perception

    Learning classifier systems (LCSs) belong to a class of algorithms based on the principle of self-organization and have frequently been applied to solving mazes, an important type of reinforcement learning (RL) problem. Maze problems are a simplified virtual model of real environments and can be used to develop the core algorithms of many real-world applications related to navigation. However, the best achievements of LCSs in maze problems are still mostly limited to non-aliasing environments, while the complexity of LCSs seems to obstruct proper analysis of the reasons for failure. We construct a new LCS agent that has a simpler and more transparent performance mechanism, yet can still solve mazes better than existing algorithms. We use the structure of a predictive LCS model, strip out the evolutionary mechanism, simplify the reinforcement learning procedure, and equip the agent with associative perception, a capability adopted from psychology. To improve our understanding of the nature and structure of maze environments, we analyze the mazes used in research over the last two decades, introduce a set of maze complexity characteristics, and develop a set of new maze environments. We then run our new LCS with associative perception through the old and new aliasing mazes, which represent partially observable Markov decision processes (POMDPs), and demonstrate that it performs at least as well as, and in some cases better than, other published systems.
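
    The central difficulty in such mazes is perceptual aliasing: distinct cells can produce identical local percepts, so a memoryless policy is forced to act the same way in both. The sketch below uses a hypothetical maze layout and observation encoding (both assumptions, not the paper's benchmark suite) to detect aliased cells; associative perception disambiguates them by pairing the current percept with the previous action-percept experience.

    # Minimal sketch of perceptual aliasing in a toy maze (illustrative layout).
    MAZE = [
        "#########",
        "#.......#",
        "#.#.#.#.#",
        "#...G...#",
        "#########",
    ]

    def observation(maze, row, col):
        """Return the 8 surrounding cells as the agent's local percept."""
        return "".join(
            maze[row + dr][col + dc]
            for dr in (-1, 0, 1)
            for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )

    # Two distinct positions with identical percepts are 'aliased': a
    # memoryless policy must act identically in both, even when the optimal
    # actions differ.
    aliased = {}
    for r in range(1, len(MAZE) - 1):
        for c, cell in enumerate(MAZE[r]):
            if cell == ".":
                aliased.setdefault(observation(MAZE, r, c), []).append((r, c))

    for percept, cells in aliased.items():
        if len(cells) > 1:
            print(f"aliased percept {percept!r} at cells {cells}")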

    Cross-Lingual Semantic Similarity Measure for Comparable Articles

    In this research we aim to find and compare cross-lingual articles on a specific topic, which requires a suitable similarity measure. Such a measure can be based on bilingual dictionaries or on numerical methods such as Latent Semantic Indexing (LSI). In this paper, we use LSI in two ways to retrieve Arabic-English comparable articles. The first is monolingual: the English article is translated into Arabic and then mapped into the Arabic LSI space; the second is cross-lingual: Arabic and English documents are mapped into a shared Arabic-English LSI space. We then compare the LSI approaches to the dictionary-based approach on several English-Arabic parallel and comparable corpora. Results indicate that the cross-lingual LSI approach is competitive with the monolingual approach, and even better on some corpora. Moreover, both LSI approaches outperform the dictionary-based approach.
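
    As an illustration of the cross-lingual variant, the sketch below builds a shared LSI space from concatenated Arabic-English pseudo-documents and compares new articles by cosine similarity in that space. The tiny corpus, the component count, and the pipeline details are assumptions for demonstration, not the authors' exact setup.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    # Hypothetical parallel training corpus: each pseudo-document concatenates
    # the Arabic and English sides of an aligned pair, so the SVD learns one
    # latent space shared by both languages.
    arabic_docs = ["الطقس اليوم مشمس", "القطة تشرب الحليب", "الكتاب على الطاولة"]
    english_docs = ["the weather today is sunny", "the cat drinks milk",
                    "the book is on the table"]
    pseudo_docs = [a + " " + e for a, e in zip(arabic_docs, english_docs)]

    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(pseudo_docs)   # term-by-pseudo-document matrix
    lsi = TruncatedSVD(n_components=2).fit(X)   # shared Arabic-English LSI space

    def crosslingual_similarity(arabic_text, english_text):
        """Cosine similarity of two articles mapped into the shared LSI space."""
        vecs = lsi.transform(vectorizer.transform([arabic_text, english_text]))
        return cosine_similarity(vecs[:1], vecs[1:])[0, 0]

    print(crosslingual_similarity("الطقس مشمس", "the weather is sunny"))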

    APRIL: Active Preference-learning based Reinforcement Learning

    This paper focuses on reinforcement learning (RL) with limited prior knowledge. In the domain of swarm robotics, for instance, the expert can hardly design a reward function or demonstrate the target behavior, ruling out both standard RL and inverse reinforcement learning. Even with limited expertise, however, the human expert is often able to express preferences and rank the agent's demonstrations. Earlier work presented an iterative preference-based RL framework: expert preferences are exploited to learn an approximate policy return, enabling the agent to perform direct policy search. Iteratively, the agent selects a new candidate policy and demonstrates it; the expert ranks the new demonstration against the previous best one; and the expert's ranking feedback lets the agent refine the approximate policy return, after which the process repeats. In this paper, preference-based reinforcement learning is combined with active ranking in order to decrease the number of ranking queries to the expert needed to yield a satisfactory policy. Experiments on the mountain car and cancer treatment testbeds show that a few dozen rankings are enough to learn a competent policy.
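
    A minimal sketch of this preference-based loop follows, with active selection reduced to greedily demonstrating the candidate that the current utility surrogate ranks highest. The feature vectors, the linear logistic (Bradley-Terry-style) surrogate, and the simulated expert are all illustrative assumptions; APRIL's actual policy space, ranking model, and active criterion are those of the paper.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    candidates = rng.normal(size=(50, 4))      # hypothetical policy feature vectors
    true_w = np.array([1.0, -0.5, 0.2, 0.0])   # stands in for the expert's judgment

    pairs, labels = [], []
    best = candidates[0]
    for _ in range(20):                        # roughly "a few dozen" queries
        # Fit the utility surrogate from the preference pairs gathered so far.
        if pairs:
            model = LogisticRegression().fit(np.array(pairs), np.array(labels))
            w = model.coef_[0]
        else:
            w = rng.normal(size=4)
        # Greedy active choice: demonstrate the candidate the surrogate likes most.
        challenger = candidates[np.argmax(candidates @ w)]
        # The expert ranks the new demonstration against the previous best one.
        challenger_wins = challenger @ true_w > best @ true_w
        d = challenger - best
        pairs += [d, -d]
        labels += [int(challenger_wins), int(not challenger_wins)]
        if challenger_wins:
            best = challenger

    print("utility of retained policy:", best @ true_w)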

    Reusing risk-aware stochastic abstract policies in robotic navigation learning

    In this paper we improve the learning performance of a risk-aware robot facing navigation tasks by employing transfer learning; that is, we use information from a previously solved task to accelerate learning in a new task. To do so, we transfer risk-aware memoryless stochastic abstract policies into the new task. We show how to incorporate risk awareness into robotic navigation tasks, in particular when tasks are modeled as stochastic shortest path problems. We then show how to use a modified policy iteration algorithm, called AbsProb-PI, to obtain risk-neutral and risk-prone memoryless stochastic abstract policies. Finally, we propose a method that combines abstract policies and show how to use the combined policy in a new navigation task. Experiments validate our proposals and show that one can find effective abstract policies that improve robot behavior in navigation problems.
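
    The sketch below shows one simple way such policies might be combined: a convex mixture of action distributions, taken abstract state by abstract state. The abstract observations, the action set, and the fixed 50/50 weights are placeholders, and the mixture rule is an assumption for illustration; the paper's AbsProb-PI algorithm and its actual combination method are not reproduced here.

    import numpy as np

    ACTIONS = ["north", "south", "east", "west"]

    # Hypothetical memoryless stochastic abstract policies:
    # abstract observation -> probability distribution over actions.
    risk_neutral = {"corridor": np.array([0.4, 0.1, 0.4, 0.1]),
                    "junction": np.array([0.25, 0.25, 0.25, 0.25])}
    risk_prone   = {"corridor": np.array([0.7, 0.0, 0.3, 0.0]),
                    "junction": np.array([0.1, 0.1, 0.7, 0.1])}

    def combine(policies, weights):
        """Mix action distributions state by state; weights must sum to 1."""
        return {s: sum(w * p[s] for w, p in zip(weights, policies))
                for s in policies[0]}

    combined = combine([risk_neutral, risk_prone], [0.5, 0.5])

    def act(policy, abstract_obs, rng=np.random.default_rng(0)):
        """Sample an action from the combined abstract policy in the new task."""
        return rng.choice(ACTIONS, p=policy[abstract_obs])

    print(act(combined, "corridor"))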