20 research outputs found

    Visualization of avian influenza virus infected cells using self-assembling fragments of green fluorescent protein

    Get PDF
    Background: Avian influenza viruses (AIVs) are influenza A viruses isolated from domestic and wild birds. AIVs, which include highly pathogenic avian influenza viruses (HPAIVs), are a major concern to the poultry industry because they cause outbreaks in poultry with extraordinarily high lethality. In addition, AIVs threaten human health through occasional zoonotic infection of humans from birds. Tools to visualize AIV-infected cells would facilitate the development of diagnostic tests and preventative methods to reduce the spread of AIVs. In this study, a self-assembling split-green fluorescent protein (split-GFP) system, combined with influenza virus reverse genetics, was used to construct a visualization method for influenza virus-infected cells. Results: The viral nucleoprotein (NP) segment of AIV was genetically modified to co-express GFP11 of the self-assembling split-GFP, and a recombinant AIV carrying the modified NP segment was generated by plasmid-based reverse genetics. Infection of cultured chicken cells with the recombinant AIV was visualized by transient transfection with a GFP1-10 expression vector, and fluorescence was observed in the cells at 96 hours post-inoculation. The titer of the recombinant AIV in embryonated eggs was comparable to that of wild-type AIV at 48 hours post-inoculation. The inserted sequence encoding GFP11 was stable for up to ten passages in embryonated eggs. Conclusions: A visualization system for AIV-infected cells using split-GFP was developed. This method could be used to understand AIV infection dynamics in cells.

    GPT-4V(ision) for Robotics: Multimodal Task Planning from Human Demonstration

    Full text link
    We introduce a pipeline that enhances a general-purpose Vision Language Model, GPT-4V(ision), by integrating observations of human actions to facilitate robotic manipulation. The system analyzes videos of humans performing tasks and creates executable robot programs that incorporate affordance insights. The computation starts by analyzing the videos with GPT-4V to convert environmental and action details into text, followed by a GPT-4-empowered task planner. In the subsequent analyses, vision systems reanalyze the video with the task plan: object names are grounded using an open-vocabulary object detector, while focusing on the hand-object relation helps detect the moments of grasping and releasing. This spatiotemporal grounding allows the vision systems to gather further affordance data (e.g., grasp type, waypoints, and body postures). Experiments across various scenarios demonstrate the method's efficacy in achieving real-robot operations from human demonstrations in a zero-shot manner. The GPT-4V/GPT-4 prompts are available at this project page: https://microsoft.github.io/GPT4Vision-Robot-Manipulation-Prompts/
    Comment: 8 pages, 10 figures, 1 table. Last updated on November 20th, 202
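    As a rough illustration of the two-stage idea described above (GPT-4V turns video frames into a textual scene/action description, and GPT-4 turns that description into a plan over predefined robot actions), the sketch below uses the OpenAI Python client. The frame sampling, prompts, action vocabulary, and the "gpt-4o" model name are illustrative assumptions, not the authors' released prompts, which live at the project page linked above.

```python
# Hypothetical sketch of a GPT-4V -> GPT-4 pipeline: frames -> text -> action plan.
# Model names, prompts, and the action vocabulary are assumptions for illustration.
import base64

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def describe_demonstration(frame_paths: list[str]) -> str:
    """Stage 1: convert sampled video frames into a textual scene/action description."""
    content = [{"type": "text",
                "text": "Describe the objects, the human action, and the hand-object "
                        "relation shown in these frames of a manipulation demonstration."}]
    for path in frame_paths:
        with open(path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"}})
    resp = client.chat.completions.create(
        model="gpt-4o", messages=[{"role": "user", "content": content}])
    return resp.choices[0].message.content


def plan_robot_actions(description: str) -> str:
    """Stage 2: turn the description into a sequence of predefined robot actions."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Output a numbered plan using only these actions: "
                        "GRASP(object), MOVE_HAND(from, to), RELEASE(object)."},
            {"role": "user", "content": description},
        ])
    return resp.choices[0].message.content


if __name__ == "__main__":
    description = describe_demonstration(["frame_000.jpg", "frame_030.jpg", "frame_060.jpg"])
    print(plan_robot_actions(description))
```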

    Bias in Emotion Recognition with ChatGPT

    Full text link
    This technical report explores the ability of ChatGPT to recognize emotions from text, which can be the basis of various applications such as interactive chatbots, data annotation, and mental health analysis. While prior research has shown ChatGPT's basic ability in sentiment analysis, its performance in more nuanced emotion recognition has not yet been explored. Here, we conducted experiments to evaluate its emotion recognition performance across different datasets and emotion labels. Our findings indicate a reasonable level of reproducibility in its performance, with noticeable improvement through fine-tuning. However, the performance varies with different emotion labels and datasets, highlighting an inherent instability and possible bias. The choice of dataset and emotion labels significantly impacts ChatGPT's emotion recognition performance. This paper sheds light on the importance of dataset and label selection, and on the potential of fine-tuning to enhance ChatGPT's emotion recognition capabilities, providing groundwork for better integration of emotion analysis in applications using ChatGPT.
    Comment: 5 pages, 4 figures, 6 tables
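    A minimal sketch of the kind of label-constrained prompting such an evaluation might use is shown below; the label set, prompt wording, temperature setting, and model name are assumptions for illustration, not the report's actual experimental setup.

```python
# Minimal sketch of label-constrained emotion recognition with the ChatGPT API.
# The label set, prompt, and model name are illustrative assumptions.
from openai import OpenAI  # pip install openai

LABELS = ["joy", "sadness", "anger", "fear", "surprise", "neutral"]  # assumed label set
client = OpenAI()


def classify_emotion(text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=0,  # reduce run-to-run variance when probing reproducibility
        messages=[
            {"role": "system",
             "content": "Classify the emotion of the user's text. "
                        "Answer with exactly one label from: " + ", ".join(LABELS) + "."},
            {"role": "user", "content": text},
        ])
    label = resp.choices[0].message.content.strip().lower()
    return label if label in LABELS else "neutral"  # fall back on out-of-set answers


print(classify_emotion("I finally passed the exam after three attempts!"))
```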

    ChatGPT Empowered Long-Step Robot Control in Various Environments: A Case Application

    Full text link
    This paper demonstrates how OpenAI's ChatGPT can be used in a few-shot setting to convert natural language instructions into an executable robot action sequence. The paper proposes easy-to-customize input prompts for ChatGPT that meet common requirements in practical applications, such as easy integration with robot execution systems and applicability to various environments while minimizing the impact of ChatGPT's token limit. The prompts encourage ChatGPT to output a sequence of predefined robot actions, represent the operating environment in a formalized style, and infer the updated state of the operating environment. Experiments confirmed that the proposed prompts enable ChatGPT to act according to requirements in various environments, and users can adjust ChatGPT's output with natural language feedback for safe and robust operation. The proposed prompts and source code are open-source and publicly available at https://github.com/microsoft/ChatGPT-Robot-Manipulation-Prompts
    Comment: 17 figures. Last updated April 11th, 202
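    The sketch below illustrates the general pattern the abstract describes: a prompt that constrains ChatGPT to a predefined action vocabulary and asks it to return both the action sequence and the updated environment state. The JSON schema, action names, and example environment are assumptions; the paper's actual prompts are in the linked repository.

```python
# Hypothetical sketch: instruction + environment state in, JSON action sequence out.
# The action vocabulary, JSON schema, and example environment are assumptions;
# the paper's actual prompts are in the linked repository.
import json

from openai import OpenAI  # pip install openai

client = OpenAI()

SYSTEM_PROMPT = (
    "You convert user instructions into robot actions.\n"
    "Allowed actions: move_to(obj), grasp(obj), release(obj), open(obj), close(obj).\n"
    'Return JSON: {"actions": [...], "updated_environment": {...}}.'
)


def instruction_to_actions(instruction: str, environment: dict) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # keep the output machine-parsable
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Environment: {json.dumps(environment)}\nInstruction: {instruction}"},
        ])
    return json.loads(resp.choices[0].message.content)


environment = {"objects": ["fridge", "juice"], "juice": {"inside": "fridge"},
               "fridge": {"state": "closed"}, "robot_hand": "empty"}
result = instruction_to_actions("Take the juice out of the fridge.", environment)
print(result["actions"])                      # e.g. ["open(fridge)", "grasp(juice)", ...]
environment = result["updated_environment"]   # carry the inferred state to the next step
```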

    Interactive Task Encoding System for Learning-from-Observation

    Full text link
    We introduce a practical pipeline that interactively encodes multimodal human demonstrations for robot teaching. This pipeline is designed as an input system for a framework called Learning-from-Observation (LfO), which aims to program household robots with manipulative tasks through few-shot human demonstrations without coding. While most previous LfO systems run on visual demonstration alone, recent research on robot teaching has shown the effectiveness of verbal instruction in making recognition robust and teaching interactive. To the best of our knowledge, however, no LfO system has yet been proposed that utilizes both verbal instruction and interaction, namely multimodal LfO. This paper proposes the interactive task encoding system (ITES) as an input pipeline for multimodal LfO. ITES assumes that the user teaches step by step, pausing hand movements in order to match the granularity of human instructions with the granularity of robot execution. ITES recognizes tasks based on the step-by-step verbal instructions that accompany the hand movements. Additionally, the recognition is made robust through interactions with the user. We test ITES on a real robot and show that the user can successfully teach multiple operations through multimodal demonstrations. The results suggest the usefulness of ITES for multimodal LfO. The source code is available at https://github.com/microsoft/symbolic-robot-teaching-interface
    Comment: 7 pages, 10 figures. Last updated January 24th, 202
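    As a loose illustration of the pause-based segmentation idea (hand movements are paused between steps so that instruction granularity matches execution granularity), the sketch below splits a wrist trajectory at pauses and pairs each motion segment with a verbal instruction. The thresholds, frame rate, and pairing rule are assumptions, not ITES's actual recognition pipeline.

```python
# Illustrative pause-based step segmentation for multimodal robot teaching.
# Thresholds, the 30 Hz frame rate, and the pairing rule are assumptions.
import numpy as np


def segment_by_pauses(wrist_xyz: np.ndarray, fps: float = 30.0,
                      speed_thresh: float = 0.02, min_pause_s: float = 0.5):
    """Split a (T, 3) wrist trajectory into motion segments separated by pauses."""
    speed = np.linalg.norm(np.diff(wrist_xyz, axis=0), axis=1) * fps  # m/s per frame
    paused = speed < speed_thresh
    segments, start, pause_run = [], None, 0
    for t, is_paused in enumerate(paused):
        if not is_paused and start is None:
            start = t                                    # motion begins
        pause_run = pause_run + 1 if is_paused else 0
        if start is not None and pause_run / fps >= min_pause_s:
            segments.append((start, t - pause_run + 1))  # motion ended at pause onset
            start = None
    if start is not None:
        segments.append((start, len(paused)))
    return segments


# Pair each motion segment with the verbal instruction uttered alongside it
# (transcription is assumed to happen elsewhere in the pipeline).
instructions = ["grasp the cup", "move it to the shelf", "release it"]
trajectory = np.cumsum(np.random.randn(300, 3) * 0.001, axis=0)  # stand-in data
steps = list(zip(instructions, segment_by_pauses(trajectory)))
print(steps)
```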

    GPT Models Meet Robotic Applications: Co-Speech Gesturing Chat System

    Full text link
    This technical paper introduces a chatting robot system that utilizes recent advancements in large-scale language models (LLMs) such as GPT-3 and ChatGPT. The system is integrated with a co-speech gesture generation system, which selects appropriate gestures based on the conceptual meaning of the speech. Our motivation is to explore ways of utilizing the recent progress in LLMs for practical robotic applications, which benefits the development of both chatbots and LLMs. Specifically, it enables the development of highly responsive chatbot systems by leveraging LLMs and adds visual effects to the user interface of LLMs as an additional value. The source code is available on GitHub for our in-house robot (https://github.com/microsoft/LabanotationSuite/tree/master/MSRAbotChatSimulation) and for Toyota HSR (https://github.com/microsoft/GPT-Enabled-HSR-CoSpeechGestures).
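    A toy sketch of the speech-to-gesture pairing is given below; the gesture inventory and the keyword-based selection rule are stand-ins, whereas the actual system selects gestures from the conceptual meaning of the speech and drives the robots via the repositories linked above.

```python
# Toy sketch of pairing chatbot replies with co-speech gestures.
# The gesture labels and keyword rules are placeholders; the real system selects
# gestures from the conceptual meaning of the speech.
GESTURE_RULES = {           # concept keyword -> gesture label (assumed inventory)
    "hello": "WAVE",
    "yes": "NOD",
    "no": "SHAKE_HEAD",
    "big": "WIDE_ARMS",
    "?": "TILT_HEAD",
}


def select_gesture(utterance: str) -> str:
    lowered = utterance.lower()
    for keyword, gesture in GESTURE_RULES.items():
        if keyword in lowered:
            return gesture
    return "BEAT"  # default rhythmic gesture when no concept matches


def respond(llm_reply: str) -> tuple[str, str]:
    """Return the speech to synthesize and the gesture to play alongside it."""
    return llm_reply, select_gesture(llm_reply)


print(respond("Hello! How can I help you today?"))  # ('Hello! ...', 'WAVE')
```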

    Constraint-aware Policy for Compliant Manipulation

    Full text link
    Robot manipulation in a physically constrained environment requires compliant manipulation, a manipulation skill that adjusts hand motion based on the force imposed by the environment. Recently, reinforcement learning (RL) has been applied to solve household operations involving compliant manipulation. However, previous RL methods have primarily focused on designing a policy for a specific operation, which limits their applicability and requires separate training for every new operation. We propose a constraint-aware policy that is applicable to various unseen manipulations by grouping several manipulations together based on the type of physical constraint involved. The type of physical constraint determines the characteristic of the imposed force direction; thus, a generalized policy can be trained with an environment and reward designed on the basis of this characteristic. This paper focuses on two types of physical constraints: prismatic and revolute joints. Experiments demonstrated that the same policy could successfully execute various compliant-manipulation operations, both in simulation and in reality. We believe this study is the first step toward realizing a generalized household robot.
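    The sketch below illustrates the underlying idea: the constraint type fixes the direction in which motion is permitted, so one reward can cover drawers, doors, and other operations of the same constraint class. The reward shape and weights are assumptions, not the paper's formulation.

```python
# Illustrative reward sketch for constraint-aware compliant manipulation.
# The reward shape and weights are assumptions; the point is only that the
# permitted motion direction is determined by the constraint type.
import numpy as np


def permitted_direction(constraint: str, hand_pos: np.ndarray,
                        axis: np.ndarray, pivot: np.ndarray) -> np.ndarray:
    """Unit direction along which the constraint allows motion at the hand position."""
    axis = axis / np.linalg.norm(axis)
    if constraint == "prismatic":        # e.g. a drawer: motion along a fixed axis
        return axis
    if constraint == "revolute":         # e.g. a door: motion tangent to the hinge circle
        tangent = np.cross(axis, hand_pos - pivot)
        return tangent / np.linalg.norm(tangent)
    raise ValueError(f"unknown constraint type: {constraint}")


def reward(hand_vel: np.ndarray, contact_force: np.ndarray, constraint: str,
           hand_pos: np.ndarray, axis: np.ndarray, pivot: np.ndarray) -> float:
    d = permitted_direction(constraint, hand_pos, axis, pivot)
    progress = float(np.dot(hand_vel, d))                   # motion along the free direction
    off_axis_force = np.linalg.norm(contact_force - np.dot(contact_force, d) * d)
    return progress - 0.1 * off_axis_force                  # penalize fighting the constraint


# Example: the same reward function covers a drawer (prismatic) and a door (revolute).
print(reward(np.array([0.1, 0.0, 0.0]), np.array([0.0, 2.0, 0.0]), "prismatic",
             np.array([0.5, 0.0, 0.0]), axis=np.array([1.0, 0.0, 0.0]),
             pivot=np.zeros(3)))
```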

    Phylogenetic analysis of avian paramyxovirus serotype-1 in pigeons in Japan

    No full text