
    Online Robot Introspection via Wrench-based Action Grammars

    Robotic failure is all too common in unstructured robot tasks. Despite well-designed controllers, robots often fail due to unexpected events. How do robots measure unexpected events? Many do not. Most robots are driven by the sense-plan-act paradigm; more recently, however, robots have begun to adopt a sense-plan-act-verify paradigm. In this work, we present a principled methodology to bootstrap online robot introspection for contact tasks. In effect, we are trying to enable the robot to answer the questions: what did I do? Is my behavior as expected or not? To this end, we analyze noisy wrench data and postulate that the latter inherently contains patterns that can be effectively represented by a vocabulary. The vocabulary is generated by segmenting and encoding the data. When the wrench information represents a sequence of sub-tasks, we can think of the vocabulary forming a sentence (a set of words with grammar rules) for a given sub-task, allowing the latter to be uniquely represented. The grammar, which can also include unexpected events, was classified in offline and online scenarios as well as for simulated and real robot experiments. Multiclass Support Vector Machines (SVMs) were used offline, while online probabilistic SVMs were used to give temporal confidence to the introspection result. The contribution of our work is the presentation of a generalizable online semantic scheme that enables a robot to understand its high-level state, whether nominal or abnormal. It is shown to work in offline and online scenarios for a particularly challenging contact task: snap assemblies. We perform the snap assembly in simulated and real one-arm experiments and in a simulated two-arm experiment. This verification mechanism can be used by high-level planners or reasoning systems to enable intelligent failure recovery or determine the next optimal manipulation skill to be used. Comment: arXiv admin note: substantial text overlap with arXiv:1609.0494
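
    As a rough illustration of the classification stage described above, the sketch below feeds hypothetical encoded wrench features (bag-of-words counts over the learned vocabulary) to a probabilistic multiclass SVM; scikit-learn's SVC with probability=True stands in for the probabilistic SVM that attaches a temporal confidence to the introspection label. All data, feature sizes, and label names are assumptions for illustration, not the authors' implementation.

        # Minimal sketch: classify encoded wrench "sentences" with a probabilistic
        # multiclass SVM (a stand-in for the introspection classifier described above).
        # Segmentation and encoding into word counts are assumed to have happened already;
        # X rows are hypothetical bag-of-words vectors over a 32-word vocabulary.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X_train = rng.random((200, 32))                      # 200 past sub-task executions
        y_train = rng.choice(["approach", "insertion", "snap", "abnormal"], size=200)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
        clf.fit(X_train, y_train)

        # Online use: as each new wrench segment is encoded, query the classifier and
        # report the label together with its probability as a temporal confidence.
        x_new = rng.random((1, 32))
        probs = clf.predict_proba(x_new)[0]
        label = clf.classes_[np.argmax(probs)]
        print(f"introspection result: {label} (confidence {probs.max():.2f})")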

    A Hybrid Brain-Computer Interface Using Motor Imagery and SSVEP Based on Convolutional Neural Network

    The key to electroencephalography (EEG)-based brain-computer interfaces (BCIs) lies in neural decoding, and decoding accuracy can be improved by using hybrid BCI paradigms, that is, by fusing multiple paradigms. However, hybrid BCIs usually require separate processing pipelines for the EEG signals of each paradigm, which greatly reduces the efficiency of EEG feature extraction and the generalizability of the model. Here, we propose a two-stream convolutional neural network (TSCNN)-based hybrid brain-computer interface. It combines the steady-state visual evoked potential (SSVEP) and motor imagery (MI) paradigms. TSCNN automatically learns to extract EEG features for the two paradigms during training, and improves decoding accuracy by 25.4% compared with the MI mode and by 2.6% compared with the SSVEP mode on the test data. Moreover, the versatility of TSCNN is verified, as it provides considerable performance in both single-mode (70.2% for MI, 93.0% for SSVEP) and hybrid-mode scenarios (95.6% for the MI-SSVEP hybrid). Our work will facilitate the real-world application of EEG-based BCI systems.
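
    To make the two-stream idea concrete, the sketch below shows what a minimal two-stream convolutional decoder for raw EEG epochs could look like in PyTorch: each stream applies its own temporal and spatial convolutions, and the flattened features are concatenated before a shared classification head. The layer sizes, electrode count, window length, and class count are illustrative assumptions, not the TSCNN configuration reported above.

        # Illustrative two-stream CNN for hybrid MI + SSVEP decoding (PyTorch).
        # Input shape (batch, 1, electrodes=8, samples=500) and all layer sizes are
        # assumptions for this sketch, not the TSCNN configuration reported above.
        import torch
        import torch.nn as nn

        class TwoStreamEEGNet(nn.Module):
            def __init__(self, n_electrodes=8, n_classes=6):
                super().__init__()
                def stream():
                    return nn.Sequential(
                        nn.Conv2d(1, 16, kernel_size=(1, 64), padding=(0, 32)),  # temporal filters
                        nn.BatchNorm2d(16), nn.ELU(),
                        nn.Conv2d(16, 32, kernel_size=(n_electrodes, 1)),        # spatial filters
                        nn.BatchNorm2d(32), nn.ELU(),
                        nn.AdaptiveAvgPool2d((1, 16)), nn.Flatten(),
                    )
                self.mi_stream = stream()     # learns MI-oriented features during training
                self.ssvep_stream = stream()  # learns SSVEP-oriented features during training
                self.head = nn.Sequential(nn.Linear(2 * 32 * 16, 128), nn.ELU(),
                                          nn.Linear(128, n_classes))

            def forward(self, x):             # x: (batch, 1, n_electrodes, n_samples)
                feats = torch.cat([self.mi_stream(x), self.ssvep_stream(x)], dim=1)
                return self.head(feats)

        model = TwoStreamEEGNet()
        logits = model(torch.randn(4, 1, 8, 500))  # dummy batch of 4 EEG epochs
        print(logits.shape)                        # torch.Size([4, 6])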

    GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest

    Instruction tuning large language models (LLMs) on image-text pairs has achieved unprecedented vision-language multimodal abilities. However, their vision-language alignment is built only at the image level; the lack of region-level alignment limits their advancement toward fine-grained multimodal understanding. In this paper, we propose instruction tuning on regions of interest. The key design is to reformulate the bounding box in the format of a spatial instruction. The interleaved sequence of visual features extracted by the spatial instruction and the language embedding is input to the LLM, which is trained on the transformed region-text data in instruction-tuning format. Our region-level vision-language model, termed GPT4RoI, brings a brand-new conversational and interactive experience beyond image-level understanding. (1) Controllability: users can interact with our model through both language and spatial instructions to flexibly adjust the level of detail of the question. (2) Capacities: our model supports not only single-region spatial instructions but also multi-region ones. This unlocks more region-level multimodal capacities such as detailed region captioning and complex region reasoning. (3) Composition: any off-the-shelf object detector can serve as a spatial instruction provider so as to mine informative object attributes from our model, such as color, shape, material, action, and relation to other objects. The code, data, and demo can be found at https://github.com/jshilong/GPT4RoI. Comment: Code has been released at https://github.com/jshilong/GPT4RoI
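
    As a hedged sketch of how a bounding box might be turned into a spatial instruction, the snippet below pools a region feature from a backbone feature map with torchvision's roi_align and projects it into an assumed LLM embedding dimension, pairing it with a placeholder token in the prompt. The token name, feature-map shape, stride, and projection size are illustrative assumptions, not the released GPT4RoI code.

        # Illustrative sketch: pool a region feature for a user-supplied bounding box and
        # pair it with a <region1> placeholder in the prompt, so the LLM receives an
        # interleaved sequence of language embeddings and region features.
        # Feature-map shape, stride, token name, and projection size are assumptions.
        import torch
        from torchvision.ops import roi_align

        feature_map = torch.randn(1, 256, 40, 40)   # hypothetical backbone output (B, C, H, W)
        stride = 16.0                                # assumed backbone downsampling factor

        # Box in image coordinates as [batch_idx, x1, y1, x2, y2].
        box = torch.tensor([[0.0, 96.0, 128.0, 320.0, 352.0]])

        # RoIAlign pools a fixed 7x7 grid of features inside the box; a linear projection
        # then maps the flattened feature into an assumed 4096-d LLM embedding space.
        region_feat = roi_align(feature_map, box, output_size=(7, 7),
                                spatial_scale=1.0 / stride, aligned=True)  # (1, 256, 7, 7)
        project = torch.nn.Linear(256 * 7 * 7, 4096)
        region_token = project(region_feat.flatten(1))                     # (1, 4096)

        # The prompt is tokenized as usual, and the embedding at the <region1> position
        # is replaced by region_token before the sequence is passed to the LLM.
        prompt = "What is <region1> doing?"
        print(prompt, region_token.shape)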

    Sustainability in the Age of Platforms. CEPS Special Report. June 2019

    Over the past few decades, new digital platforms such as China’s Alibaba, Japan’s Rakuten and the U.S.’s eBay have grown from startups into multinational giants. With a few clicks of the keyboard, these online marketplaces bring together a seller and a buyer from anywhere in the world. This study examines the transformative impact of online marketplaces on economic, social and environmental sustainability. It finds great opportunities. Platforms promote growth, break down barriers of distance and leap over rigid class structures, bringing marginalized outsiders into the mainstream. The study also identifies dangers stemming from the growth of e-commerce, from the reduction of labor protections to an explosion of shipping waste. What are the responsibilities of platforms? How can they promote sustainability? Policymakers are asking these questions, but struggling to strike the right balance between the opportunities and the dangers. Until now, these questions have received little attention from scholars. This study fills that void by providing some initial answers and recommendations for improvement.

    Maslinic acid potentiates the anti-tumor activity of tumor necrosis factor α by inhibiting NF-κB signaling pathway

    Background: Tumor necrosis factor alpha (TNFα) has been used to treat certain tumors in clinical trials. However, the curative effect of TNFα has been undermined by induced NF-κB activation in many types of tumor. Maslinic acid (MA), a pharmacologically safe natural product, is known for its anti-oxidant, anti-inflammatory, and anti-viral activities. The aim of this study was to determine whether MA potentiates the anti-tumor activity of TNFα through the regulation of NF-κB activation. Results: In this study, we demonstrate that MA significantly enhanced TNFα-induced inhibition of pancreatic cancer cell proliferation and invasion, and potentiated TNFα-induced cell apoptosis by suppressing TNFα-induced NF-κB activation in a dose- and time-dependent manner. Addition of MA inhibited TNFα-induced IκBα degradation, p65 phosphorylation, and nuclear translocation. Furthermore, MA decreased the expression levels of NF-κB-regulated genes, including genes involved in tumor cell proliferation (Cyclin D1, COX-2 and c-Myc), apoptosis (Survivin, Bcl-2, Bcl-xl, XIAP, IAP-1), invasion (MMP-9 and ICAM-1), and angiogenesis (VEGF). In an athymic nu/nu mouse model, we further demonstrated that MA significantly suppressed pancreatic tumor growth, induced tumor apoptosis, and inhibited NF-κB-regulated anti-apoptotic gene expression, such as Survivin and Bcl-xl. Conclusions: Our data demonstrate that MA can potentiate the anti-tumor activities of TNFα and inhibit pancreatic tumor growth and invasion by activating the caspase-dependent apoptotic pathway and by suppressing NF-κB activation and its downstream gene expression. Therefore, MA together with TNFα could be a promising combination in the treatment of pancreatic cancer.

    An Experimental Study on the Establishment of Pulmonary Hypertension Model in Rats induced by Monocrotaline

    Pulmonary hypertension (PH) is caused by pulmonary arterial vascular disease that increases pulmonary vascular resistance and the load on the right heart, resulting in weakening or even failure of right ventricular function. Establishing a rat PH model with monocrotaline is a repeatable, simple and accessible technique that has been widely used in research on the treatment of pulmonary hypertension. This paper discusses the principle and properties of the monocrotaline-induced PH model in rats.