
    CleanNet: Transfer Learning for Scalable Image Classifier Training with Label Noise

    In this paper, we study the problem of learning image classification models with label noise. Existing approaches that depend on human supervision are generally not scalable, as manually identifying correct or incorrect labels is time-consuming, whereas approaches that do not rely on human supervision are scalable but less effective. To reduce the amount of human supervision needed for label noise cleaning, we introduce CleanNet, a joint neural embedding network that requires only a fraction of the classes to be manually verified in order to provide knowledge of label noise that can be transferred to other classes. We further integrate CleanNet and a conventional convolutional neural network classifier into one framework for image classification learning. We demonstrate the effectiveness of the proposed algorithm on both the label noise detection task and the image classification task on noisy data, using several large-scale datasets. Experimental results show that CleanNet reduces the label noise detection error rate on held-out classes, where no human supervision is available, by 41.5% compared to current weakly supervised methods. It also achieves 47% of the performance gain of verifying all images while only 3.2% of images are verified on an image classification task. Source code and dataset will be available at kuanghuei.github.io/CleanNetProject. Comment: Accepted to CVPR 2018.
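    As a rough, purely illustrative sketch of the verification idea summarized above (not the authors' released implementation), the snippet below flags a label as noisy when a query image embedding falls below a cosine-similarity threshold against a class-level reference embedding; the 128-dimensional random embeddings and the 0.5 threshold are arbitrary placeholders.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def verify_label(query_embedding: np.ndarray,
                 class_embedding: np.ndarray,
                 threshold: float = 0.5) -> bool:
    """Return True if the query image's label is judged correct.

    `class_embedding` stands in for a class-level prototype learned from a
    small set of human-verified reference images; the 0.5 threshold is a
    placeholder, not a value taken from the paper.
    """
    return cosine_similarity(query_embedding, class_embedding) >= threshold

# Toy usage with random vectors standing in for learned embeddings.
rng = np.random.default_rng(0)
query, prototype = rng.normal(size=128), rng.normal(size=128)
print("label judged correct:", verify_label(query, prototype))
```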

    Green tea extract supplementation ameliorates CCl4-induced hepatic oxidative stress, fibrosis, and acute-phase protein expression in rat

    Background/Purpose: We evaluated the long-term effects of green tea extract (GTE) supplementation on oxidative stress, biliary acute-phase protein expression, and liver function in CCl4-induced chronic liver injury. Methods: We evaluated the antioxidant activity of GTE in comparison with those of vitamin C, vitamin E, and β-carotene in vitro by using an ultrasensitive chemiluminescence analyzer. Chronic liver injury was induced by intraperitoneally administering carbon tetrachloride (CCl4) (1 mL/kg body weight, twice weekly) to female Wistar rats for 8 weeks. The effects of low (4 mg/kg body weight per day) and high (20 mg/kg body weight per day) doses of intragastric GTE on CCl4-induced liver dysfunction and fibrosis were examined by measuring bile and blood reactive oxygen species levels and biochemical parameters using Western blot and two-dimensional polyacrylamide gel electrophoresis techniques. Results: GTE showed greater scavenging activity against O2−, H2O2, and hypochlorous acid (HOCl) in vitro than vitamin C, vitamin E, and β-carotene. In vivo, CCl4 markedly increased bile and blood reactive oxygen species production, lipid accumulation, the number of infiltrated leukocytes, fibrosis, hepatic hydroxyproline content, and plasma alanine aminotransferase and aspartate aminotransferase activities, and reduced plasma albumin levels. Two-dimensional polyacrylamide gel electrophoresis revealed that CCl4 increased the acute-phase expression of six biliary proteins and decreased hepatic B-cell lymphoma 2 (Bcl-2), catalase, and CuZn superoxide dismutase protein expression. GTE supplementation attenuated CCl4-enhanced oxidative stress, changes in biochemical parameters, pathology, and acute-phase protein secretion, and preserved antioxidant/antiapoptotic protein expression. Conclusion: GTE supplementation attenuates CCl4-induced hepatic oxidative stress, fibrosis, acute-phase protein excretion, and hepatic dysfunction via antioxidant and antiapoptotic defense mechanisms.

    SERPINE2, an inhibitor of plasminogen activators, is highly expressed in the human endometrium during the secretory phase

    Background: SERPINE2, also known as protease nexin-1, belongs to the serine protease inhibitor (SERPIN) superfamily. It is one of the potent SERPINs that modulate the activity of plasminogen activators (PAs). PAs and their SERPIN inhibitors, such as SERPINB2 and SERPINE1, are expressed in the human endometrium and have been implicated in implantation. However, expression data on SERPINE2 in the human endometrium are still lacking. We therefore conducted an investigation to reveal the spatiotemporal and cellular expression of SERPINE2 in the human uterus during the menstrual cycle. Methods: Uterine tissue was collected from seven patients who underwent a hysterectomy, together with formalin-fixed, paraffin-embedded endometrial curettage or uterine specimens from 120 archived patients. Western blotting was performed to evaluate the specificity and sensitivity of the antibody. Immunohistochemistry was conducted to localize the SERPINE2 expression site. Quantitative analysis was conducted to evaluate expression levels of SERPINE2 in various sub-phases of the menstrual cycle. Results: The SERPINE2 protein was primarily detected in the uterine fluid during the mid- and late-secretory phases of the menstrual cycle. It was predominantly expressed in the luminal and glandular epithelium, less in the myometrium, and only dispersedly in certain stromal cells throughout the menstrual cycle. Quantitative analysis of SERPINE2 expression levels in the glandular epithelium revealed that it was highly expressed in the endometrium during the secretory phase compared with the proliferative phase. Conclusions: The SERPINE2 protein is highly expressed in the endometrium during the secretory phase, indicating that it may participate in tissue remodeling involved in implantation.

    Open-World Object Manipulation using Pre-trained Vision-Language Models

    For robots to follow instructions from people, they must be able to connect the rich semantic information in human vocabulary, e.g. "can you get me the pink stuffed whale?", to their sensory observations and actions. This raises a notably difficult challenge for robots: while robot learning approaches allow robots to learn many different behaviors from first-hand experience, it is impractical for robots to have first-hand experiences that span all of this semantic information. We would like a robot's policy to be able to perceive and pick up the pink stuffed whale, even if it has never seen any data interacting with a stuffed whale before. Fortunately, static data on the internet has vast semantic information, and this information is captured in pre-trained vision-language models. In this paper, we study whether we can interface robot policies with these pre-trained models, with the aim of allowing robots to complete instructions involving object categories that the robot has never seen first-hand. We develop a simple approach, Manipulation of Open-World Objects (MOO), which leverages a pre-trained vision-language model to extract object-identifying information from the language command and image, and conditions the robot policy on the current image, the instruction, and the extracted object information. In a variety of experiments on a real mobile manipulator, we find that MOO generalizes zero-shot to a wide range of novel object categories and environments. In addition, we show how MOO generalizes to other, non-language-based input modalities for specifying the object of interest, such as finger pointing, and how it can be further extended to enable open-world navigation and manipulation. The project's website and evaluation videos can be found at https://robot-moo.github.io/ Comment: Accepted at the 7th Conference on Robot Learning (CoRL 2023).
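    To make the interface described above concrete, here is a hypothetical, heavily simplified sketch of conditioning a policy on the current image, the instruction, and object information extracted by a vision-language model; every class, function, and constant below is a placeholder rather than part of the MOO system.

```python
from dataclasses import dataclass
from typing import Tuple
import numpy as np

@dataclass
class PolicyInput:
    image: np.ndarray                    # current camera frame
    instruction: str                     # e.g. "pick up the pink stuffed whale"
    object_center: Tuple[float, float]   # normalized (x, y) location of the object

def locate_object(image: np.ndarray, instruction: str) -> Tuple[float, float]:
    """Placeholder for a pre-trained vision-language model / open-vocabulary
    detector that localizes the object named in the instruction."""
    return (0.5, 0.5)  # dummy location for this sketch

def policy_step(image: np.ndarray, instruction: str) -> np.ndarray:
    """Condition a (stub) policy on image, instruction, and object location."""
    obj = locate_object(image, instruction)
    state = PolicyInput(image=image, instruction=instruction, object_center=obj)
    # A trained policy network would map `state` to an action; return zeros here.
    return np.zeros(7)  # e.g. a 7-DoF arm action

action = policy_step(np.zeros((224, 224, 3)), "pick up the pink stuffed whale")
```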

    Optimization of the epoxidation of methyl ester of palm fatty acid distillate

    Methyl ester of palm fatty acid distillate (PFAD-ME) can be used for producing epoxide compounds. PFAD-ME contains 39.3% oleic acid (C18:1) and has an iodine value of 49.2 g I2/100 g. It can be converted to a low-oxirane-content epoxide, which can be used for several applications, such as plasticizers, polyols, or alkanolamines, with appropriate modification. Temperature, the mole ratio of hydrogen peroxide to unsaturation, and the mole ratio of formic acid to unsaturation were optimized in the epoxidation of PFAD-ME. The study showed that more than 98% conversion of unsaturation to the epoxide ring moiety can be achieved within 3 h of reaction by using the optimum molar ratio of 1:1:4 (unsaturation:formic acid:hydrogen peroxide) and a temperature of 50°C.
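    As a back-of-the-envelope illustration (our own arithmetic, not figures reported in the paper), the snippet below converts the stated iodine value into moles of unsaturation per 100 g of PFAD-ME and scales the formic acid and hydrogen peroxide charges according to the reported 1:1:4 optimum molar ratio.

```python
# Reagent estimate for the reported optimum ratio
# (unsaturation : formic acid : H2O2 = 1 : 1 : 4). Molar masses in g/mol.
M_I2 = 253.81
M_HCOOH = 46.03
M_H2O2 = 34.01

iodine_value = 49.2   # g I2 absorbed per 100 g PFAD-ME (from the abstract)
basis_mass = 100.0    # g of PFAD-ME taken as the calculation basis

# Each mole of I2 absorbed corresponds to one mole of C=C double bonds.
mol_unsaturation = iodine_value / M_I2     # about 0.19 mol per 100 g
mol_formic_acid = 1.0 * mol_unsaturation   # 1:1 with unsaturation
mol_h2o2 = 4.0 * mol_unsaturation          # 4:1 with unsaturation

print(f"Unsaturation: {mol_unsaturation:.3f} mol per {basis_mass:.0f} g")
print(f"Formic acid : {mol_formic_acid:.3f} mol ({mol_formic_acid * M_HCOOH:.1f} g)")
print(f"H2O2        : {mol_h2o2:.3f} mol ({mol_h2o2 * M_H2O2:.1f} g)")
```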

    DeVLBert: Learning Deconfounded Visio-Linguistic Representations

    In this paper, we propose to investigate the problem of out-of-domain visio-linguistic pretraining, where the pretraining data distribution differs from that of the downstream data on which the pretrained model will be fine-tuned. Existing methods for this problem are purely likelihood-based, which leads to spurious correlations and hurts generalization when the model is transferred to out-of-domain downstream tasks. By spurious correlation, we mean that the conditional probability of one token (object or word) given another can be high (due to dataset biases) without a robust (causal) relationship between them. To mitigate such dataset biases, we propose a Deconfounded Visio-Linguistic Bert framework, abbreviated as DeVLBert, to perform intervention-based learning. We borrow the idea of the backdoor adjustment from the field of causality and propose several neural-network-based architectures for Bert-style out-of-domain pretraining. Quantitative results on three downstream tasks, Image Retrieval (IR), Zero-shot IR, and Visual Question Answering, show the effectiveness of DeVLBert in boosting generalization ability. Comment: 10 pages, 4 figures, to appear in the ACM MM 2020 proceedings.
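    For reference, the backdoor adjustment borrowed from causal inference replaces conditioning on X with an intervention on X by marginalizing over a confounder z weighted by its prior; this is the general formula, not DeVLBert's specific neural parameterization.

```latex
% Backdoor adjustment: intervene on X instead of conditioning on it,
% averaging over the confounder prior P(z) rather than P(z \mid X).
P(Y \mid \mathrm{do}(X)) \;=\; \sum_{z} P(Y \mid X, z)\, P(z)
```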

    Language to Rewards for Robotic Skill Synthesis

    Large language models (LLMs) have demonstrated exciting progress in acquiring diverse new capabilities through in-context learning, ranging from logical reasoning to code-writing. Robotics researchers have also explored using LLMs to advance the capabilities of robotic control. However, since low-level robot actions are hardware-dependent and underrepresented in LLM training corpora, existing efforts in applying LLMs to robotics have largely treated LLMs as semantic planners or relied on human-engineered control primitives to interface with the robot. On the other hand, reward functions have been shown to be flexible representations that can be optimized for control policies to achieve diverse tasks, while their semantic richness makes them suitable to be specified by LLMs. In this work, we introduce a new paradigm that harnesses this realization by utilizing LLMs to define reward parameters that can be optimized to accomplish a variety of robotic tasks. Using reward as the intermediate interface generated by LLMs, we can effectively bridge the gap between high-level language instructions or corrections and low-level robot actions. Meanwhile, combining this with a real-time optimizer, MuJoCo MPC, enables an interactive behavior-creation experience where users can immediately observe the results and provide feedback to the system. To systematically evaluate the performance of our proposed method, we designed a total of 17 tasks for a simulated quadruped robot and a dexterous manipulator robot. We demonstrate that our proposed method reliably tackles 90% of the designed tasks, while a baseline using primitive skills as the interface with Code-as-Policies achieves 50% of the tasks. We further validated our method on a real robot arm, where complex manipulation skills such as non-prehensile pushing emerge through our interactive system. Comment: https://language-to-reward.github.io
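    As a heavily simplified, hypothetical sketch of the pipeline described above, the snippet below maps an instruction to reward-template parameters via a stand-in for an LLM call and then runs a crude random search in place of a real-time optimizer such as MuJoCo MPC; none of the parameter names, values, or functions come from the paper's code.

```python
import random
from typing import Dict

def llm_to_reward_params(instruction: str) -> Dict[str, float]:
    """Stand-in for an LLM call that maps a natural-language instruction to
    parameters of a fixed reward template (parameter names are made up)."""
    if "jump" in instruction.lower():
        return {"target_base_height": 0.6, "upright_weight": 1.0}
    return {"target_base_height": 0.3, "upright_weight": 1.0}

def reward(state: Dict[str, float], params: Dict[str, float]) -> float:
    """Parameterized reward template evaluated on a toy robot state."""
    height_error = abs(state["base_height"] - params["target_base_height"])
    return -height_error + params["upright_weight"] * state["uprightness"]

def optimize(params: Dict[str, float], steps: int = 200) -> float:
    """Crude random search standing in for a real-time optimizer such as
    MuJoCo MPC: sample candidate states and keep the best reward found."""
    best = float("-inf")
    for _ in range(steps):
        candidate = {"base_height": random.uniform(0.0, 1.0), "uprightness": 1.0}
        best = max(best, reward(candidate, params))
    return best

params = llm_to_reward_params("Make the robot jump")
print("best reward found:", optimize(params))
```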